Planet Jabber

August 23, 2019

yaxim

Happy Birthday, yaxim! 10 Years of Android XMPP

A decade ago today, the first yaxim commit was made, making the project now officially half as old as XMPP. Since then, much has happened both in the XMPP ecosystem and on the Android side of things.

2009: The Beginning

Back in 2009, the Android platform was still brand new, and it was lacking a Free IM client. There were rumors and announcements, but nobody had posted working code yet.

The first specific hint was a German slide deck by Sven and Chris, presenting their semester project, YAXIM - Yet Another XMPP Instant Messenger.

Some friendly emails later, a GitHub project got created, and the coding went on. At the end of the year, a lightning talk was given at the 26C3, showing how yaxim looked back then:

26C3 slide: state of affairs

26C3 slide: screenshots

In the early days, it was a huge challenge to reliably deliver messages, and things were improving only slowly.

Significant Steps

In 2010, YAXIM was renamed to yaxim, to look more like a name and less like a yelling acronym.

In 2013, Bruno was launched as yaxim’s cute little sibling, to appeal to children and everybody who loves animals. It has attained almost 2000 active fans by now.

Also in 2013, the yax.im XMPP service was launched, mainly to make on-boarding from yaxim and Bruno easier, but also to have a stable and reliable service usable for mobile clients.

Finally, in 2016, yaxim got its current yak-oriented logo.

yaxim feature graphic

A Review In Numbers

From day 1, yaxim was more or less a hobby project, with no commercial backing and no full time developers. Over the years, the code base has grown slowly, with 2015 being an especially slow year.

yaxim commit history

Some contributors were more active than others… 😉

yaxim Lines of Code history

Even though yaxim still has more total installs on Google Play than Conversations, the latter is said to be the go-to client on Android, and very popular among federation nerds. Still, at least in the last three years, there was no decline in the number of devices that yaxim is installed on (Google doesn’t provide stats before 2016):

3 years of devices

Current Challenges

The technical foundation of yaxim (Smack 3.x, ActionBarSherlock) has become rather dated, so currently much effort is going into making yaxim look great on modern (Material Design) Android devices and into supporting modern features like interactive permission dialogs and battery saving, as well as the Matrix protocol (with that support currently dormant).

Please check out the Google Play beta channel to stay up-to-date with the most recent developments, and let’s see how the next decade will develop!

August 23, 2019 10:43

Erlang Solutions

XMPP Protocol Use-Cases and Guide | Erlang Solution blog

Who will find this interesting

If you’re considering XMPP for your project but you are unsure if it can provide the functionality you need, you’ll eventually end up here:

http://xmpp.org/extensions/

I’m pretty sure you’ll be quite intimidated by such a long list of extensions. In some cases it will be pretty easy to find what you need. If you look for PubSub functionality, you’ll quickly notice “Publish-Subscribe”. Sometimes it’s not so obvious though. XMPP developers already know that in order to synchronise outgoing messages between several devices, they have to enable “Message Carbons”. Not very intuitive, is it?

The aim of this blog post is to guide you towards the proper XMPP technologies and solutions, given your specific use cases. I’ve worked with and deployed solutions powered by XMPP, such as MongooseIM, for years; so let me be your personal Professor Oak, providing the perfect “companion(s)” to work with as you begin your journey in the XMPP world. There are almost 400 XEPs - will you catch them all? ;)

The length of this article comes not from the complexity of the descriptions but from the number of use cases and features covered. :)

All numbers and information on implementation status are valid for March 2017.

What can you expect here?

For every use case, I will list XMPP features you definitely should consider using. Each one of them will be briefly described. The goal is to let you understand each feature’s usefulness without reading the whole specification. Besides that, each item will include the name of the MongooseIM module providing the discussed extension, plus example client implementations.

What you won’t find in this post

This post won’t cover any XMPP basics. It assumes you either know them already (what JIDs, C2S, S2S, IQs, stanzas, streams etc. are) or you intend to learn them from some other guide, like the excellent (iOS) tutorial written by Andres Canal (Part 1, Part 2). It’s more of a cookbook, not Cooking For Dummies.

ToC

  1. I’m creating …
    1.1 … a mobile application.
    1.2 … a desktop application.
    1.3 … a web application.
    1.4 … an application that just can’t speak XMPP.
  2. I need my application to …
    2.1 … show message status like Facebook does.
    2.2 … provide message archive to end users.
    2.2.1 I’d like to have a full text search feature.
    2.3 … display inbox (a list of conversations with unread count and a last message).
    2.4 … allow file transfers and media sharing between users.
    2.4.1 P2P
    2.4.2 File Upload
    2.5 … support groupchats …
    2.5.1 … and I need precise presence tracking in each group.
    2.5.2 … and I don’t need to broadcast presence information in each group.
    2.6 … be compatible with other public XMPP setups.
    2.7 … present the same view of each conversation on every user’s device.
    2.8 … allow users to block each other.
    2.9 … support end-to-end encryption.
    2.10 … be a part of Internet of Things.
    2.11 … receive push notifications.
    2.12 … publish messages to groups of subscribers.

1. Creating …

Before we proceed to more specific requirements, it’s important to identify crucial standards based on your application type.

1.1 … a mobile application.

Smartphones are omnipresent nowadays. It’s a fact. Considering the whole software market, mobile apps are an important medium between various companies and their customers. Some of them are the actual product (games, communicators, car navigation, etc.), not only a “channel”. If you’re going to develop a mobile application, you will need…

XEP-0198 Stream Management

This extension actually provides two features. One is stanza delivery confirmation (on both the server and the client side), which allows early detection of broken connections or a malfunctioning network layer. The other is stream resumption, which makes reconnection faster by reducing the round-trip count and relieves the client from fetching the message archive, as pending, unacknowledged messages are retransmitted from the server buffer.

It is enabled by default in MongooseIM and supported by major client libs like Smack or XMPPFramework. From a client developer perspective, it’s pretty transparent because the whole extension is enabled with a single flag or method call.
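
For example, with Smack (the Java library mentioned above) the whole negotiation boils down to a couple of calls on the TCP connection. A minimal sketch, assuming a recent Smack 4.x release and a hypothetical account on example.org; exact method names may differ in your version:

import org.jivesoftware.smack.tcp.XMPPTCPConnection;
import org.jivesoftware.smack.tcp.XMPPTCPConnectionConfiguration;

public class StreamManagementExample {
    public static void main(String[] args) throws Exception {
        // Ask Smack to negotiate XEP-0198 (including resumption) on every new connection.
        XMPPTCPConnection.setUseStreamManagementDefault(true);
        XMPPTCPConnection.setUseStreamManagementResumptionDefault(true);

        XMPPTCPConnectionConfiguration config = XMPPTCPConnectionConfiguration.builder()
                .setXmppDomain("example.org")                // hypothetical domain
                .setUsernameAndPassword("alice", "secret")   // hypothetical credentials
                .build();

        XMPPTCPConnection connection = new XMPPTCPConnection(config);
        connection.connect().login();

        // After login you can check whether the server accepted stream management.
        System.out.println("Stream management enabled: " + connection.isSmEnabled());
        System.out.println("Stream was resumed: " + connection.streamWasResumed());
    }
}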

MUC Light, MIX, Push notifications, HTTP File Upload

These extensions are especially useful in the mobile environment. Why? With MUC Light and MIX you gain control over presence broadcasting - you can spare your users frequent radio wakeups and bandwidth usage. These extensions are a significant improvement over traditional presence-driven group chats.

Virtually every app on our smartphones uses push notifications. Some are useful and some are just annoying commercials. It doesn’t matter - it’s almost certain you’ll want them integrated with your XMPP service.

HTTP File Upload allows asynchronous media sharing, which is much more convenient in the case of group chats and doesn’t require both parties to stay online during the transfer.

These are just brief summaries. You can find more details further in this post.

1.2. … a desktop application.

Despite the expansion of mobile phones and of software products exclusive to them (Instagram, Snapchat, Tinder, etc.), nobody can deny the comfort of a UI operated with a mouse, keyboard, or tablet. Some apps simply require processing power that portable devices can’t provide. If your code is going to be executed on desktop PCs and laptops, you’ll appreciate…

There are no extensions that are strictly essential for desktop apps. Everything depends on specific applications. Just bear in mind that the standards important for mobile apps are generally useful for desktop ones too, only less critical.

1.3. … a web application.

As the days of heavy browser incompatibility (thank you, standardisation!) and Flash technology abuse are long gone, web applications are a great way to provide cross-platform solutions. It’s not only easier to reach more platforms but also to ensure the users are always running the most up-to-date version.

If you’re a web developer, you’re going to connect to the XMPP server via BOSH or Websockets.

Websockets

Websockets allow upgrading an HTTP connection to an asynchronous, full-duplex, binary one (a bit of a simplification, but that’s the essence). It means that XMPP stanzas can be exchanged almost as efficiently as over a raw TCP connection (Websockets add the small overhead of a frame header carrying the packet size). It’s the recommended protocol for single-page apps.

Note: You can combine Stream Management’s resumption with Websockets, although it will still be slower than BOSH’s session pause.

Warning: Websockets are not implemented by old browsers. If you have to support any outdated clients, take a look at this table first.

BOSH

Defined in XEP-0124: Bidirectional-streams Over Synchronous HTTP (BOSH) and XEP-0206: XMPP Over BOSH. This protocol encapsulates XMPP stanzas in HTTP requests. It also simulates asynchronous, bidirectional communication by issuing long polling requests from client to the server to retrieve live data. What does it mean in practical terms?

A persistent connection may be maintained, but in general BOSH is designed to deal with interrupted connections. It’s a perfect solution for web apps that trigger browser navigation. On such an event, all connections made e.g. by JavaScript in the browser are closed, but the BOSH session survives on the server side (not forever, of course) and the client can quickly and efficiently resume it after the page reload.

The protocol is pretty verbose though, so if you don’t need this feature, go for Websockets.

1.4. … an application that just can’t speak XMPP.

You probably think that I’m crazy; why use XMPP with XMPP-less clients? Let’s change the way we think about XMPP for a moment. Stop considering XML the only input data format the XMPP server accepts. What if I told you that it’s possible to restrict XML to the server’s routing core and just make REST calls from any application? Tempting?

It’s a non-standard approach and it hasn’t been documented by XSF (yet), but MongooseIM already exposes most important functionalities via REST. Check out this and this document to find out more.

2. I need my application to …

Now we continue to more specific use cases.

2.1. … show message status like Facebook does.

By message status we mean the following states (plus live notifications):

  1. Not sent to server yet.
  2. Acknowledged by the server.
  3. Delivered to the recipient.
  4. Displayed by the recipient.
  5. User is composing a message.
  6. User has stopped composing a message.

(1) and (2) are handled by Stream Management. It’s pretty obvious - before receiving an ack from the server, you are in (1), and the ack confirms the message has entered state (2).

We can deal with (3) and (4) by using XEP-0333: Chat Markers. These are special stanzas sent by a recipient to the original sender. There are dedicated markers for received and displayed events.

(5) and (6) are provided by XEP-0085: Chat State Notifications. It is up to a client to send updates like <composing/> and <paused/> to the interlocutor.
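
Smack ships dedicated managers for both extensions, but the elements themselves are simple enough to attach by hand. A minimal sketch, assuming Smack 4.x and using the element names and namespaces straight from the XEPs; the JIDs are hypothetical:

import org.jivesoftware.smack.XMPPConnection;
import org.jivesoftware.smack.packet.Message;
import org.jivesoftware.smack.packet.StandardExtensionElement;
import org.jxmpp.jid.EntityBareJid;

public class MessageStatusExample {

    // (5): tell the interlocutor that we are typing (XEP-0085).
    static void sendComposing(XMPPConnection connection, EntityBareJid to) throws Exception {
        Message composing = new Message(to);
        composing.addExtension(StandardExtensionElement.builder(
                "composing", "http://jabber.org/protocol/chatstates").build());
        connection.sendStanza(composing);
    }

    // (4): confirm that a previously received message was displayed (XEP-0333).
    static void sendDisplayedMarker(XMPPConnection connection, EntityBareJid to, String messageId)
            throws Exception {
        Message marker = new Message(to);
        marker.addExtension(StandardExtensionElement.builder("displayed", "urn:xmpp:chat-markers:0")
                .addAttribute("id", messageId)   // id of the message being marked as displayed
                .build());
        connection.sendStanza(marker);
    }
}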

2.2. … provide message archive to end users.

Virtually every modern chat application maintains conversation history both for 1-1 communication and group chats. It can remind you of a promise you’ve made, be evidence in a divorce case, or help in police investigation.

XMPP provides two protocols for accessing message archives. The older one, XEP-0136 Message Archiving is used by hardly anyone, because it’s difficult to implement and overloaded with features. It has been superseded by more modern XEP-0313 Message Archive Management, which is the current standard.

There is one caveat though - its syntax changed significantly between versions, so it’s common for libraries and servers to explicitly state which versions are supported by the specific piece of software. These are 0.2, 0.3, 0.4(.1) and 0.5. MongooseIM supports all of them in the mod_mam module. If you choose another server, make sure its MAM implementation is compatible with your client library. Smack and XMPPFramework use the 0.4 syntax.
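
As a rough illustration, fetching the most recent page of a 1-1 archive with Smack’s MamManager could look like the sketch below (Smack 4.2-era API and a hypothetical JID; names moved around in later Smack versions, so check your release):

import org.jivesoftware.smack.XMPPConnection;
import org.jivesoftware.smackx.mam.MamManager;
import org.jxmpp.jid.impl.JidCreate;

public class ArchiveExample {
    // Ask the server-side archive for the newest messages exchanged with one contact.
    static void fetchRecentHistory(XMPPConnection connection) throws Exception {
        MamManager mamManager = MamManager.getInstanceFor(connection);

        // Newest 20 archived messages of the conversation with bob@example.org.
        MamManager.MamQueryResult result =
                mamManager.queryMostRecentPage(JidCreate.from("bob@example.org"), 20);

        // Each entry wraps the original <message/> together with its archive timestamp.
        System.out.println("Fetched " + result.forwardedMessages.size() + " archived messages");
    }
}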

2.2.1. I’d like to have a full text search feature.

Although standard Message Archive Management doesn’t specify any queries for full text search, it remains flexible enough to create such requests on top of the existing ones.

In MongooseIM this feature is still in an experimental phase and has recently been merged into the master branch. It’s not supported by any client library yet, so you have to construct a custom MAM query to do full-text searches. Take a look at the PR description; it’s not that difficult. :)

2.3. … display inbox (a list of conversations with unread count and a last message).

Unfortunately there are no open solutions providing this feature. The XMPP community is in the process of discussing and creating a specification for inbox functionality. Erlang Solutions is designing an XEP proposal, which you can view here.

A quasi-inbox is available as part of the experimental Bind 2.0 standard. It doesn’t cover all possible use cases, but a list of unread messages is what you actually need for optimal UX after establishing a connection. This feature is already under development in the MongooseIM project.

In the meantime, you can build an inbox view by persisting the last known archived message ID or timestamp and querying Message Archive Management for all messages that came later. Once you fetch them all, you can build the inbox. Unfortunately this is not very efficient, which is why the community needs a new standard.

2.4. … allow file transfers and media sharing between users.

Almost everyone loves to share cat pictures and every modern IM solution provides means to do this. Various file transfer techniques in the XMPP world can be grouped in two categories: P2P connections and file upload.

The former involves establishing a direct connection between two clients, sometimes with a bit of help from a TURN server. It ensures that data won’t get stored on any intermediate servers. Obviously, it requires less effort from the service provider because it’s easier and cheaper to set up a TURN service than to maintain a proper media server (or pay for storage in the cloud).

File upload is much more efficient when sharing media with a group. It doesn’t require both parties to remain online for the transfer duration.

2.4.1. P2P

Now, you DO have a choice here. There are a couple of XEPs, describing various P2P transfer initiation methods. XEP-0047 In-Band Bytestreams (IBB) is guaranteed to work in every network, because it sends data (Base64-encoded) via IQs. So if you can reach the XMPP service, you can transfer files. It may be slow and not very convenient but it will work.

Let’s carry on. You can transfer media via bytestreams external to XMPP. The P2P session is negotiated via XMPP but it’s only the “signalling” part. There are quite a few XEPs describing various negotiation and transmission protocols, so I will highlight specific implementations rather than listing all of the names which would only confuse readers who just want to send some bytes.

  • XMPPFramework: Look for XMPPIncomingFileTransfer and XMPPOutgoingFileTransfer. They support SOCKS5 and In-Band Bytestreams.
  • Smack: Everything begins with FileTransferManager. It supports SOCKS5 and In-Band Bytestreams as well (see the sketch below).
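
A minimal Smack sketch of the sending side, assuming Smack 4.x and a hypothetical contact that is currently online with the resource "mobile" (file transfers are addressed to a full JID, i.e. a concrete device):

import java.io.File;

import org.jivesoftware.smack.XMPPConnection;
import org.jivesoftware.smackx.filetransfer.FileTransferManager;
import org.jivesoftware.smackx.filetransfer.OutgoingFileTransfer;
import org.jxmpp.jid.EntityFullJid;
import org.jxmpp.jid.impl.JidCreate;

public class FileTransferExample {
    // Offer a file; Smack negotiates SOCKS5 or In-Band Bytestreams for us.
    static void sendCatPicture(XMPPConnection connection) throws Exception {
        FileTransferManager manager = FileTransferManager.getInstanceFor(connection);

        EntityFullJid recipient = JidCreate.entityFullFrom("bob@example.org/mobile");
        OutgoingFileTransfer transfer = manager.createOutgoingFileTransfer(recipient);
        transfer.sendFile(new File("cat.png"), "An obligatory cat picture");

        while (!transfer.isDone()) {
            Thread.sleep(500); // a real client would display transfer.getProgress() instead
        }
    }
}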

2.4.2. File Upload

Unless you already have a dedicated media server that exposes an API to perform uploads and downloads, you should definitely take a look at XEP-0363 HTTP File Upload. It defines standard stanzas to request upload slots and the respective download links. It is the XMPP server’s responsibility to allocate the slots and return the links to the client.

Unfortunately this extension is not widely supported yet. You can find it in XMPPFramework but not in Smack yet. In the case of MongooseIM, it’s already available with Amazon S3 backend (with more storage plugins to come!).

2.5. … support group chats …

A couple of years ago it was really simple - there was only one kind of group chat supported in the XMPP world. Today we have three standards, two of them maintained by the XSF and one published by Erlang Solutions. MIX (XEP-0369) doesn’t have any implementations yet, and as a standard it changes very frequently, so it is not described in this post.

2.5.1. … and I need precise presence tracking in each group.

If you need an IRC-like experience where users have certain roles in a room and client disconnection triggers leaving the room, then the classic XEP-0045 Multi-User Chat will work for you. It has its disadvantages (frequent presence broadcasts may impact UX and consume processing power or connection throughput) but it fits use cases where accurate presence information is important. It is provided by MongooseIM’s mod_muc (other major servers implement it as well) and is supported by all mainstream client libs.
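
Joining a classic MUC room with Smack takes just a few lines; a sketch assuming Smack 4.x, with a hypothetical room on conference.example.org and the nickname "alice":

import org.jivesoftware.smack.XMPPConnection;
import org.jivesoftware.smackx.muc.MultiUserChat;
import org.jivesoftware.smackx.muc.MultiUserChatManager;
import org.jxmpp.jid.EntityBareJid;
import org.jxmpp.jid.impl.JidCreate;
import org.jxmpp.jid.parts.Resourcepart;

public class MucExample {
    // Join a XEP-0045 room and say hello.
    static void joinAndGreet(XMPPConnection connection) throws Exception {
        MultiUserChatManager mucManager = MultiUserChatManager.getInstanceFor(connection);

        // Room JIDs live on a dedicated MUC service, e.g. conference.example.org.
        EntityBareJid roomJid = JidCreate.entityBareFrom("friends@conference.example.org");
        MultiUserChat muc = mucManager.getMultiUserChat(roomJid);

        muc.join(Resourcepart.from("alice"));   // the nickname used inside the room
        muc.sendMessage("Hello everyone!");
    }
}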

2.5.2. … and I don’t need to broadcast presence information in each group.

Erlang Solutions’ Multi-User Chat Light is a protocol derived from real-world use cases where groups don’t care about presences and the full member list is always available to room members. It makes some strong assumptions (like only two types of affiliation, or rooms being joinable only by invite) but is designed to reduce round-trips, expose a powerful API (e.g. room creation + configuration + adding new members in one request) and be easy to work with. Check it out and see if it fits your application. The server implementation is currently exclusive to MongooseIM (mod_muc_light), and respective plugins are available for Smack and XMPPFramework.

2.6. … be compatible with other public XMPP setups.

Even some proprietary installations integrate with the open XMPP world (like GTalk and Facebook at some point), so if this is your use case as well, the first important thing to remember is that no custom stanzas may leave your cluster. By custom I mean anything not covered by an XSF-approved XEP. Additionally, you will really benefit from using the XEP-0030 Service Discovery protocol, because you can never be sure what feature set is supported on the other end. It is used to query both clients and servers, and virtually every client and server supports it. In the case of MongooseIM, the base module is mod_disco.
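
For example, before relying on server-side archiving on a remote domain you could ask for its feature list first. A sketch with Smack’s ServiceDiscoveryManager (Smack 4.x assumed, hypothetical remote domain):

import org.jivesoftware.smack.XMPPConnection;
import org.jivesoftware.smackx.disco.ServiceDiscoveryManager;
import org.jivesoftware.smackx.disco.packet.DiscoverInfo;
import org.jxmpp.jid.impl.JidCreate;

public class DiscoExample {
    // Ask a remote server which protocols it implements before relying on them.
    static boolean remoteServerSupportsMam(XMPPConnection connection) throws Exception {
        ServiceDiscoveryManager disco = ServiceDiscoveryManager.getInstanceFor(connection);

        DiscoverInfo info = disco.discoverInfo(JidCreate.domainBareFrom("example.net"));
        return info.containsFeature("urn:xmpp:mam:1"); // one of the XEP-0313 namespaces
    }
}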

2.7. … present the same view of each conversation on every user’s device.

I use Facebook messenger on multiple devices and I really expect it to display the same shopping list I got from my wife on both my desktop and my mobile phone. It usually breaks message order but anyway - at least the list is there.

The problem is actually a bit more complex, because you have to take care of synchronising both online and offline devices.

Online devices can ask the server to forward all incoming/outgoing messages, even if they originate from or are addressed to some other resource of the same user. It is achieved by enabling XEP-0280 Message Carbons. On the client side it’s easy - just enable the feature after authenticating and the server will do the rest. It’s supported by MongooseIM in mod_carboncopy module. You can find respective implementations in Smack, XMPPFramework, Stanza.io and many others, since it’s a very simple, yet powerful extension.
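
With Smack, for instance, this boils down to one manager call right after login; a minimal sketch assuming Smack 4.x:

import org.jivesoftware.smack.XMPPConnection;
import org.jivesoftware.smackx.carbons.CarbonManager;

public class CarbonsExample {
    // Enable XEP-0280 so the server copies 1-1 traffic of other resources to this device too.
    static void enableCarbons(XMPPConnection connection) throws Exception {
        CarbonManager carbonManager = CarbonManager.getInstanceFor(connection);
        if (carbonManager.isSupportedByServer()) {
            carbonManager.enableCarbons();
        }
    }
}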

If you want to fetch everything that happened while a specific device was offline for a while, just query XEP-0313 Message Archive Management (see “… provide message archive to end users.” section).

2.8. … allow users to block each other.

You just can’t stand your neighbour nagging you via IM to turn down the volume while Kirk Hammett is performing his great solo? Block him. Now. XMPP can help you with it. In two ways actually.

Yes, XMPP features two standards that deal with blocking: XEP-0016 Privacy Lists and the simpler XEP-0191 Blocking Command. The former allows users to create pretty precise privacy rules, like “don’t send outgoing presences to JID X” or “accept IQs only from JIDs in my roster”. If you need such a fine grained control, take a look at MongooseIM’s mod_privacy. On the client side it is supported by the likes of Smack and XMPPFramework.

Blocking Command is much simpler but most setups will find it sufficient. When a client blocks a JID, no stanza will be routed from the blockee to the blocker. Period. MongooseIM (mod_blocking), Smack and XMPPFramework have it.
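
With Smack, blocking that neighbour is a single request; a sketch assuming Smack 4.2+ and a hypothetical JID:

import java.util.Collections;

import org.jivesoftware.smack.XMPPConnection;
import org.jivesoftware.smackx.blocking.BlockingCommandManager;
import org.jxmpp.jid.Jid;
import org.jxmpp.jid.impl.JidCreate;

public class BlockingExample {
    // Silence the noisy neighbour with a single XEP-0191 request.
    static void blockNeighbour(XMPPConnection connection) throws Exception {
        BlockingCommandManager blocking = BlockingCommandManager.getInstanceFor(connection);

        if (blocking.isSupportedByServer()) {
            Jid neighbour = JidCreate.from("neighbour@example.org");
            blocking.blockContacts(Collections.singletonList(neighbour));
        }
    }
}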

2.9. … support end-to-end encryption.

When Alice wants to send a message to Bob… no, we’ve all probably seen this classic example too many times already. :)

There is no “one size fits all” when it comes to E2E encryption. The first tradeoff you’ll have to make is deciding whether you want new user devices to be able to decrypt old messages, or whether you prefer the property of forward secrecy. For a full comparison of the available encryption methods, let’s take a look at the table published by the OMEMO authors:

                       Legacy Open PGP   Open PGP       OTR            OMEMO
Multiple Devices       Yes               Yes            No             Yes
Offline Messages       Yes               Yes            No             Yes
File Transfer          Yes               Non-standard   Non-standard   Yes
Verifiability          No                Yes            Yes            Yes
Deniability            Yes               No             Yes            Yes
Forward Secrecy        No                No             Yes            Yes
Server Side Archive    Yes               Yes            No             No
Per Message Overhead   High              High           Low            Medium

It’s difficult to find an open library that supports any of these methods. The Gajim communicator has an OMEMO plugin. Smack and XMPPFramework don’t support E2E encryption in their upstream versions. If you’re going to use E2E encryption in your application, most probably you’ll have to implement it on your own. The good thing is that there are standards you can base your code on.

2.10. … be a part of Internet of Things.

We are a peculiar bunch. We use semiconductors to build machines that do heavy number crunching for us, deliver messages in a blink of an eye and control robotic arms with precision far beyond ours. A desire has awoken in us to go even deeper and augment everything with electronics. To let everything communicate with each other.

If you’re designing a fridge microcontroller that is supposed to fetch results from bathroom scales and lock the door for 8h for every excessive BMI point, you’ll need…

  • XEP-0323 Internet of Things - Sensor Data
  • XEP-0324 Internet of Things - Provisioning
  • XEP-0325 Internet of Things - Control
  • XEP-0326 Internet of Things - Concentrators
  • XEP-0347 Internet of Things - Discovery

Unfortunately there are no public implementations of these standards. I wish it was easier but it seems you just can’t avoid reading these XEPs, picking the most suitable parts and creating your own implementation.

To find out more and become an active member of XMPP IoT community, check out IoT Special Interest Group.

2.11. … receive push notifications.

Push notifications (usually) do a great service to mobile devices’ battery life. It is great indeed that a single TCP connection is maintained by the OS while apps remain hibernated in the background. It is natural for every chat application to deliver notifications to the end user, even when the smartphone is resting in a pocket. How does XMPP cooperate with popular services like APNS or GCM?

It depends.

Although it’s not difficult to find XEP-0357 Push Notifications, it deserves some explanation. This specification is very generic. It assumes the existence of another XMPP-enabled “App server” that handles push notifications further. Although implementations can be found (e.g. in MongooseIM or the Prosody server), it is very common for commercial installations to use custom protocols to provide push tokens and send push packets directly to the respective services (APNS, GCM, SNS…).
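
On the client side, later Smack releases (newer than this article’s March 2017 snapshot) ship a PushNotificationsManager for the generic XEP-0357 flow. A rough sketch under that assumption; the push component JID and node are placeholders coming from your own push infrastructure, and exact method names may differ in your Smack version:

import org.jivesoftware.smack.XMPPConnection;
import org.jivesoftware.smackx.push_notifications.PushNotificationsManager;
import org.jxmpp.jid.Jid;
import org.jxmpp.jid.impl.JidCreate;

public class PushExample {
    // Ask the XMPP server to forward offline notifications to the "App server",
    // which in turn talks to APNS/GCM.
    static void enablePush(XMPPConnection connection) throws Exception {
        PushNotificationsManager pushManager = PushNotificationsManager.getInstanceFor(connection);

        // Hypothetical push component and per-device node; a real deployment would
        // first check support via Service Discovery.
        Jid pushService = JidCreate.from("push.example.org");
        pushManager.enable(pushService, "device-token-node");
    }
}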

2.12. … publish messages to groups of subscribers.

Publish-subscribe is an ancient and extremely useful pattern. XMPP got its own PubSub specification quite early (first versions were published in 2003) and even though the protocol is pretty verbose (and a bit complicated), for the basic usage you’ll need to learn only the most important aspects: there are nodes in the PubSub service where publishers can publish data. Nodes can group other nodes or remain plain leaves. That’s the whole story. The rest is about configuration, access control, etc.

XEP-0060 Publish-Subscribe is implemented in every major piece of XMPP software. In the case of MongooseIM, it’s handled by mod_pubsub. You can find it in popular client libraries as well: Smack, XMPPFramework or Stanza.io.
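
For a taste of the client side, publishing a payload to an existing leaf node with Smack could look like this sketch (Smack 4.x assumed; the node name, element name and namespace are hypothetical):

import org.jivesoftware.smack.XMPPConnection;
import org.jivesoftware.smackx.pubsub.LeafNode;
import org.jivesoftware.smackx.pubsub.PayloadItem;
import org.jivesoftware.smackx.pubsub.PubSubManager;
import org.jivesoftware.smackx.pubsub.SimplePayload;

public class PubSubExample {
    // Publish a small payload to a leaf node on the default pubsub.<domain> service.
    static void publishAnnouncement(XMPPConnection connection) throws Exception {
        PubSubManager pubSubManager = PubSubManager.getInstance(connection);

        // Assumes the node already exists; use createNode(...) otherwise.
        LeafNode node = pubSubManager.getNode("company-announcements");

        SimplePayload payload = new SimplePayload("announcement", "urn:example:announcements",
                "<announcement xmlns='urn:example:announcements'>Cake in the kitchen!</announcement>");
        node.publish(new PayloadItem<>(payload));
    }
}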

Set sail!

Now, if you feel brave enough, you can dive into this looong, official list of XEPs. These documents are designed to provide precise information for library and server developers. In your daily routine you probably won’t care about every server-side edge case or whether some information is encapsulated in element X or Y. They are perfect reference guides for when you’re stuck somewhere or need to tap into a library’s plugin internals.

I’d recommend one more intermediate step though. Browse the feature lists that virtually every server and client project publishes on its webpage. They usually skip minor items (or enumerate them in separate, more detailed docs) and highlight the features that may be important to you. This way you’ll expand your XMPP vocabulary and knowledge while postponing the stage where reading XEPs is unavoidable, making the learning curve less steep.

It won’t be long until you realise that XMPP is your friend, I promise. :)

Stay tuned for Part 2! In the meantime:

  1. Brush up on your XMPP basics with our guide to building an iOS app from scratch using XMPPFramework (parts 1 and 2)
  2. Learn more about MongooseIM, our XMPP based open source mobile messaging platform.

August 23, 2019 08:22

August 22, 2019

Ignite Realtime Blog

Announcing the XMPP-Strings Testframework

@Flow wrote:

jXMPP, a FOSS XMPP base library, has just been extended with a testframework for “XMPP Strings”. Currently, this is limited to Local-, Domain- and Resourceparts and the various XMPP address types, but may be extended to future Strings found in XMPP-land.

The testframework comes with a corpus of known valid and invalid XMPP addresses (JIDs). I am happy about feedback and contributions to the corpus. Hopefully it becomes a useful resource for XMPP developers. You can find the corpus under

The testframework currently uses the following libraries to prepare and enforce the Strings:

  • ICU4J
  • GNU libidn
  • xmpp.rocks’s PRECIS

A further minimalistic implementation, called “simple”, is part of jXMPP.

You can run the testframework simply by

git clone https://github.com/igniterealtime/jxmpp.git
cd jxmpp
./test-xmpp-strings

if you have gradle and python3 in your path.

Please note that ICU4J and libidn (and “simple”) do not currently provide the required PRECIS profile and hence the older Stringprep profiles are used.

Feel free to use the provided infrastructure to test your own implementation. Thanks to JNI and the polyglot programming feature of the GraalVM, it should be easily possible. Please contact me if you need assistance.

Posts: 1

Participants: 1

Read full topic

by @Flow Florian Schmaus at August 22, 2019 15:50

August 21, 2019

Jérôme Poisson

SàT PubSub 0.3.0 has been released

SàT Pubsub is a server independent PEP/PubSub XMPP service, which aims to be complete and universal.

This project was born because it is difficult to have feature-full PEP/PubSub on every XMPP server, and even where it is available there can be huge delays before new functionality is implemented, or it can be difficult to extend.

The "Salut à Toi" project being using extensively XMPP PubSub functionalities, there was 2 ways to work around the issue:

  • concentrate on a particular XMPP server, recommend it, and if possible make it evolve in the desired direction.
    This would mean being tied to a specific XMPP server implementation and risking functionality that works incorrectly (or not at all) on other servers

  • create a server-independent component, using XMPP extensions to get privileged access to the server

The latter option was chosen: SàT PubSub uses the Namespace Delegation (XEP-0355) and Privileged Entity (XEP-0356) XMPP extensions to be able to offer advanced features and PEP.

Aside from the "SàT" in the name (which is due to its origins), you don't need to install or use Salut à Toi with this component; it can be used by any XMPP-compatible software.

SàT PubSub already implemented MAM and RSM (allowing searches in archives and pagination when retrieving items); the 0.3.0 version also brings:

  • the presence access model
  • +notify handling
  • the notion of administrators, i.e. identifiers ("JIDs") with extra privileges
  • affiliations
  • the possibility to find items sorted by creation or modification date, thanks to Order-By (XEP-0413)
  • the "Node Schema" experimental feature, to associate data types to a node using Data Forms (XEP-0004). This feature is notably used by Salut à Toi for tickets
  • the "Serial IDs" experimental feature, which set new items identifiers using increments (1, 2, 3, etc.) instead of random values. This is notably useful for tickets.
  • the "PubSub Admin" experimental feature, which let administrators publish items by specifying an other publisher. This can be used to restore nodes backups.
  • the "consistent publisher" experimental feature which, once activated in node settings, keep the original publisher when an item is updated by the node owner or an administrator. This permit to update an item without preventing the original publisher do modify it himself or herself (for instance while editing a blog comment or updating a ticket status).
  • a config file can now be used with SàT PubSub, avoiding to have to specify settings – including the password – entirely on the command line. The same file as for Salut à Toi (sat.conf) is used, the settings for SàT PubSub must be set in the [pubsub] section.
  • a new documentation

You'll find more details in the CHANGELOG.

Version 0.4 development has already begun with a working Python 3 port, completing the port of the whole Salut à Toi ecosystem.

To install SàT PubSub, you just have to run pip install sat_pubsub in a Python 2 virtual environment; check the documentation for more details.

by goffi at August 21, 2019 18:12

ProcessOne

Real-time Stack #24

ProcessOne curates the Real-time Radar – a newsletter focusing on articles about technology and business aspects of real-time solutions. Here are the articles we found interesting in Issue #24. To receive this newsletter straight to your inbox on the day it is published, subscribe here.

MQTT for system administrators and for the IoT

They say MQTT is a PUB/SUB protocol for the Internet of Things, which it was originally designed for, but it’s also well suited for monitoring machines and services. Presentation given at BSDCan2019 in Ottawa.

Z-Wave to MQTT

Fully configurable Z-Wave to MQTT gateway and control panel. The Z-Wave to MQTT add-on allows you to decouple your Z-Wave network from your Home Assistant instance by leveraging your MQTT broker.

Migrating trashserver.net from Prosody to Ejabberd

Author writes: “Yesterday I moved my old Prosody setup to a new Ejabberd-based XMPP server setup. I’d like to leave you a few notes on why and how I did that. There were multiple reasons for my decision to give Ejabberd a chance.”

Emoji avatars for personal website

Author writes: “My previous avatar was almost 3 years old, and I was getting tired of it. I decided to replace my avatar on my website for my IndieWebCamp Austin hack day project. But if you know me, you know I can’t do anything the easy way.”

Gotify server

A simple server for sending and receiving messages, written in Go and licensed under the MIT license.

LabPlot 2.6 released with MQTT support

The next release of LabPlot is announced! One big new feature is the support for the MQTT protocol. With LabPlot 2.6 it is now possible to read live data from MQTT brokers. This feature was contributed by Ferencz K. during Google Summer of Code 2018.

by Marek Foss at August 21, 2019 10:52

August 20, 2019

Tigase Blog

BeagleIM 3.2 and Siskin IM 5.2 released

New versions of XMPP clients for Apple's mobile and desktop platforms have been released.

Keep reading for the list of changes.

by wojtek at August 20, 2019 19:07

August 19, 2019

João Duarte

11th gsoc week report


Nearly out of time! Some final adjustments were made this week, documentation was updated accordingly, and preparations are being made to deliver GSoC's official final submission. Check out the latest changes in this week's blogpost!

Events

Code

1 - prosodyctl: fixed a typo
2 - util.startup: changed the way util.paths.complement_lua_path was being accessed
3 - util.paths: fixed another typo
4 - util.pluginloader: Added a new path to the variable local_names
5 - core.configmanager: Removed code related to complement_lua_path
6 - prosodyctl: install, remove and list commands now use the call_luarocks function
7 - util.prosodyctl: Removed the check_flags and execute_command functions
8 - util.prosodyctl: call_luarocks function now sets the directory variable itself

Documentation

These are planned changes to the already existing documentation that you can access at prosody's website. For example, you can check prosodyctl's page through the link:  https://prosody.im/doc/prosodyctl
Changes:
doc.prosodyctl: Got rid of the text related to the --tree flag, which has been removed
doc.depends: Added Luarocks as a dependency, and related info
doc.plugins_directory: Added info related to the plugin installer's directory configuration
doc.plugin_installer: Gave the .md extension and rewrote this file
doc.installing_modules: Added info about the plugin installer's directory and which paths it checks

Difficulties/Considerations

After talking with my mentor, we ended up removing the --tree flag from the installer, for now, as it was not clear how useful it would be.
Initially we thought it would be nice to also give users the possibility of specifying the working directory from the command line. But why would they want new plugins in different directories for the same Prosody server, when they can now use the installer to easily list, install and remove them while keeping everything organized in a single directory?
Behind the scenes, it would also require more work than what I had done. When started, the Prosody server goes through a startup process. Some of the things it does are:
  • complement Lua's package.path and package.cpath
  • check for plugins in a number of directories that are part of the default/configured plugin paths
When using commands with the --tree flag, Prosody would have to go through these startup tasks again so that the new paths are taken into consideration in future operations. We would need to, in a way, reload Prosody each time one of these commands was run, so that all the paths could be updated.
At the end of the day, it seems to be too much hassle for a seemingly redundant feature in a project that is meant to be lightweight and simple.
This might be something to reconsider in the future but, for now, the user can use the default path for community modules or set a new one in the configuration file, similar to what we already do for the core plugin directories.

See it in action!

You can check out this video, where I install mod_message_logging. I'm using this one here because it makes it simple to demonstrate that the plugin is really installed and being used by the software.

Future goals

GSoC's final submission is due, so this week we are tackling that, so that everything is nicely presented at the end. See you soon!

by João Duarte (noreply@blogger.com) at August 19, 2019 15:46

August 18, 2019

Jérôme Poisson

SàT progress note 2019-W33

Hi there,

after the release I've got some vacations and I've made a break, that's why I've skipped the last few weeks progress notes, but now I'm back.

The release was a big thing, after 3 years of developments, and many new exiting stuff to show (Cagou, which is also running on Android, the advanced file sharing, photo albums, events, etc.), it is also the first "general audience" version, meaning that it's the first one which is usable for everybody, not only people with technical background. Even if there is still work to do on UX and stability, we're on the right track.

The Android version of Cagou is not stable yet, but there are several things which were not fixable easily under Python 2 (there is notably a bug on file opening with Python for Android which is not reproducible on Python 3). I hope to have feedbacks to make next version really enjoyable.

I was really happy to finally do this release, and now I can move forward and in particular do the Python 3 port.

During my holidays I've spend some time doing the first steps of the port (just a few hours here and there), and I could quite quickly get the backend and frontends running. Once back, this week, I've reviewed it and done some polishing before committing, and I'm happy to say that the development version of the backend and all the frontends now run on Python 3. Some features are still not working, but most things are here and running.

This unlock many things, and I'm very looking forward to used them : asyncio with async/await syntax, Brython, Transcrypt, and many libraries which are Python 3 only.

Thanks to better error handling, I could also already fix some issues not seen with Python 2, and we can already appreciate a performance boost (specially visible with tickets).

Once that done, I've released SàT Pubsub 0.3.0 (a release note will come soon about this), and started the developments of 0.4.0 with… Python 3 port : dev version of SàT Pubsub is now also running on Python 3. That means that all SàT ecosystem is now Python 3 only.

Beside that, I've also made 3 pull requests to see Cagou, Primitivus and jp on Flathub, but there are some modifications to do there before they can be merged.

That's all, see you next week.

by goffi at August 18, 2019 19:22

August 16, 2019

Peter Saint-Andre

Cooperative Thinking

A few years ago I did a bunch of research and thinking about alternative forms of organizational structure, focused especially on cooperatives (at the time I was thinking about setting up a co-op to run the Jabber.org messaging service). The approach I found most interesting was funding a co-op through something called a direct public offering. This can be done for much less money and with much greater control than VC funding or angel investing or whatever - it's basically a kind of more advanced crowdfunding. Companies that have done this include My Trail (an outdoor clothing company in Boulder, Colorado), Real Pickles (a food co-op in Massachusetts), and Equal Exchange (the original fair trade coffee company, also located in Massachusetts). Other co-ops I investigated are self-funded and haven't pursued the DPO route (e.g., Namasté Solar, Plausible Labs, and Isthmus Engineering)...

August 16, 2019 00:00

August 15, 2019

ProcessOne

Real-time Enterprise #23

ProcessOne curates the Real-time Radar – a newsletter focusing on articles about technology and business aspects of real-time solutions. Here are the articles we found interesting in Issue #23. To receive this newsletter straight to your inbox on the day it is published, subscribe here.

Six Ways IoT will Revolutionize Your Business

Technology continues to revolutionize the business world, and IoT is an excellent example of simple programs making a pivotal difference in business performance. Items connected to the internet can be used to gather data, automate tedious processes, and fulfill many other needs traditionally ignored.

Successful Migration to a Custom XMPP Solution

This article describes the challenges faced when migrating from a third-party chat to a custom XMPP-based messaging solution for Forward Health, a UK-based messaging solution for healthcare.

MQTT and CoAP: Security and Privacy Issues in IoT and IIoT Communication Protocols

Machine-to-machine communication protocols, which enable machines to “talk” with one another so that commands are communicated and data is transmitted, are indispensable to applications and systems that make use of the internet of things and the industrial internet of things.

Global Internet of Things Market Set to Reach $318bn by 2023

The global market for Internet of Things technologies will almost treble in size over the next five years, according to a new forecast.

What Are The Most Significant AI Advances We Will See In The Near Future?

What are the most significant AI advances we’ll see over the next few decades?

Tomorrow's "General" AI Revolution Will Grow from Today's Technology

During his closing remarks at the I/O 2019 keynote last week, Jeff Dean, Google AI’s lead, noted that the company is looking at “AI that can work across disciplines,” suggesting the Silicon Valley giant may soon pursue artificial general intelligence, a technology that eventually could match or exceed human intelligence.

Blockchain is the Best Vehicle for IoT

IoT or Internet of Things is a much touted technology these days. All-pervasive, spanning multiple verticals, a humongous amount of data is being captured from all around us by millions of devices.

by Marek Foss at August 15, 2019 09:09

August 12, 2019

ProcessOne

GopherCon 2019 Highlights

I had the chance to once more attend the big annual Go conference: GopherCon 2019. For the first time this year, the location changed from Denver to San Diego. The ocean, the mild climate and an extraordinary venue for the party (on the USS Midway) contributed to a relaxed and friendly atmosphere.

GopherCon is a social event

Gophercon is a social event and the very nice organization made the experience very enjoyable.

Still, this is a tech event and the program is usually packed with great talks that are influential for the Go community for the coming year. Last year’s talk on Go 2 shaped the discussions of the community during the past year.

I attended many great talks this year as well. As it is a multitrack conference, I could not see them all and I will have to catch up on video. That said, here is a quick overview of the ones I found inspiring, elegant or thought-provoking.

Note: I will next focus on the two main conference days, but there is a lot more at GopherCon, from pre-conference workshops to the more informal community day, with lightning talks and hacking sessions with peers.

The talks

Day 1

Russ Cox set the tone with an update on Go 2. It was less of a surprise than last year, as it was more a recap of what has happened on Go 2 since his talk a year ago. The important things to note are that the error-handling changes are done for now, with some of the experiments (like wrapping) added to Go and others abandoned (like the check and try experiments). Go dependency management has been progressing nicely, with Go modules becoming the standard in Go 1.13, along with the main Go module proxy. He left the generics discussion aside, letting Ian Lance Taylor present the new proposal on the second day.
You can read the companion blog post: Experiment, Simplify, Ship.

Google speakers were generally quite involved at the conference, presenting upcoming major improvements to the language, environment or tooling. I generally found those talks to be very interesting. Rebecca Stambler’s talk on gopls, the upcoming official Go language server, was promising. gopls should help provide high-quality Go support in all major text editors.

Regarding tooling, I really enjoyed the talk by Katie Hockman (also from Google) about the Go module proxy. She described in great detail the proxy protocol, but also the checksum database, which should increase the safety of the Go ecosystem. The checksum database is designed as a Merkle tree, so that it cannot be tampered with and is auditable.

This was not all about tooling. Elias Naur (independent programmer) introduced his work on Gio, a framework to write cross-platform UIs in Go. This is a follow-up on his work on Go Mobile. He has done a tremendous job on the framework, but a lot of work still remains. The approach reminds me of Flutter, with Go replacing Dart as the core language. Still, given the size of the Flutter team and the fact that the Flutter project is backed by Google, it will be a challenge to turn Gio into a mainstream framework. You can check his slides: Simple, Portable and Efficient Graphical Interfaces in Go

If you want to catch up on video, I also recommend Eric Chiang’s talk “PKI for Gophers”. This is a nice how-to session on everything certificate-related in Go, packed with lots of examples. You can check out his slides here: PKI for Gophers

Day 1 session ended with the story of a new Go contributor, Oliver Stenbom (student), detailing his road from a quick fix to a real official patch.

We finally all gathered on the USS Midway, for a wonderful party.

Day 2

The highlight of day 2 was Ian Lance Taylor’s (Google) talk on generics in Go. It was an overview of the new proposal for contracts in Go, going from “Why” to “How”. I recommend watching the talk as a nice recap of the story of Go and generics. You can also read the companion blog post: Why Generics?. Finally, if you have some time to help with your feedback, you can read the full updated proposal by Ian Lance Taylor and Robert Griesemer: Contracts — Draft Design. I like the updated proposal better. It is simpler and easier to understand than the previous one. While I am sure many small issues will be found during the implementation and testing phase, it now really feels like the approach is close to what will end up being implemented in Go. I look forward to watching the iterations.

I enjoyed Aaron Schlesinger’s (Microsoft) talk on the Athens proxy for Go modules. He made a good point explaining that the Go ecosystem is not centralized (unlike NPM in the JavaScript world) and that the proxy architecture should be widely used for decentralization to be effective. I am totally convinced this is important. I am now running one on my laptop and we are planning to deploy our own Go module proxy based on Athens.

The day provided more interesting practical talks. I recommend watching Chris Hines’ (Comcast) talk on how they optimized their Go soft real-time streaming video service.

Jonathan Amsterdam (Google) introduced his work on upcoming Go tooling to help you make decisions on major version changes, based on automatic analysis of breaking API changes.

Mat Ryer’s talk on Go best practices was very interesting and very practical. I recommend it, as most of his advice is common sense once you know Go, but takes quite some time to discover on your own. You can watch the talk Mat gave on this topic at GopherCon EU: How I Write HTTP Web Services After 8 Years

I could not attend Gabbi Fisher’s (Cloudflare) talk on sockets, but I heard good things about it and will definitely watch it when the video is released.

To close my comments on the tech talks, if you are trying to get a large team to adopt Go, you can check out the talks from Elena Morozova (Uber) and Jessica Lucci (GitHub). They shared two different points of view on how to make Go work at scale in large organisations.

The final highlight of the day was Johnny Boursiquot’s talk about the Go community and how to keep a strong focus on diversity. It was a very personal and strong talk. This is an important topic. I am looking at getting involved in GoBridge, and I wish more programming communities would promote Bridge Foundry principles.

Conclusion

This was a great GopherCon. The community is managing to grow while keeping a friendly and welcoming vibe. The speakers were very good. It is a good sign for the Go language and I expect it will keep on rising in popularity as a tool to write maintainable and scalable server applications, microservices and API.

Are you a startup or a company that needs an efficient expert team to develop scalable, robust backend for your projects – fast?

We can help. We develop highly maintainable and scalable code in impressively short timeframes. Please contact us and we will help to accelerate your projects.

by Mickaël Rémond at August 12, 2019 15:42

João Duarte

10th gsoc week report


Last week was spent mostly on writing good documentation and trying to see what else can be improved around what's done. Check below to see which doc files are about to be updated so far!

Prosody documentation:

/doc/prosodyctl- added command entries
/doc/prosodyctl - Added a reference to the plugin installer documentation page
/doc - Added a reference to the plugin installer documentation page
/doc/plugin_installer - Created this new page
/doc/plugin_installer - Added introduction, pointing to links regarding prosodyctl, core and community modules, installing modules manually
/doc/example_config - Added the new field for the plugin installer directory
/doc/configure - Added info about configuring the installer's path

Difficulties/Considerations

Not too sure what to do with /doc/installing_modules. I could update the installer info here, but I feel it needs a dedicated space. This page is still useful on its own, since core plugins still come installed the old way.
I haven't been able to run my own version of Prosody's website, which I'd like to do. Well, I was, but there are these problems with hyperlinks that are explained in the website's readme file. Some writing tricks are needed to make them work, and it seems people usually do this via nginx, which I never use. I used Apache to test the installer fetching modules, but I have no idea how to make the hyperlinks work there. I spent some time on this Apache/nginx issue but ended up moving on, as I feel it isn't really the important thing to focus on. Now I'm just sending my mentors patch files, so we can decide whether things are good to go or not.
I should try to make a true Prosody package too. I've been running things from source, since it is absolutely the most convenient way for developers to work, but that isn't what the everyday user will do. Maybe I shouldn't do so though; the installer might just be included in the next major Prosody release. This is an extra and can be done after GSoC anyway.
GSoC is ending and I need to look out for that. I feel I haven't been able to communicate as well as I should though. Maybe it's just the stress.
Hope this project can be really useful at the end O.o

Future goals

Review documentation
Check what can be improved, regarding docs
Keep checking for core improvements until merge time
Consider updating Prosody's package
Prepare final project webpage to showcase work, as requested by GSoC

by João Duarte (noreply@blogger.com) at August 12, 2019 15:23

Ignite Realtime Blog

Openfire 4.4.1 Release

@akrherz wrote:

The Ignite Realtime Community is happy to announce the promotion of release 4.4.1 of Openfire. This release signifies our effort to stabilize the 4.4 branch of Openfire while work continues on the next feature release. A changelog exists denoting the 14 Jira issues resolved since the 4.4.0 release. This release should behave better with clustering enabled.

You can find downloads available with the following sha1sum values for the release artifacts.

bb6a6aabfac41d1615efc21d4d6bbf8d5b7ae473  openfire-4.4.1-1.i686.rpm
9b53d9785de7860868ee1e7d08ab66f1e7555672  openfire-4.4.1-1.noarch.rpm
17d76ae3f3da0579ca86ea514ae2a9962d5cd233  openfire-4.4.1-1.x86_64.rpm
6f0997af32aec39cf7250e1ede05f6ee010eb7fc  openfire_4.4.1_all.deb
64fa7f2fd6566ed204cba44ba88aa53d416bfb05  openfire_4_4_1_bundledJRE.exe
d99fd9d1753e5dea56df9db1d2e137b3a6660201  openfire_4_4_1_bundledJRE_x64.exe
3a09fe7480760cacf0e164363d718b893fbff995  openfire_4_4_1.dmg
0778df4566dc1f002f13c19865130c8b746d5540  openfire_4_4_1.exe
b3ebd42455d538867a01a4708bc03196c65b29f4  openfire_4_4_1.tar.gz
409c3e7a5ca477daeb3c11e88e31ac529f543666  openfire_4_4_1_x64.exe
b3d7c26e992ca0ef82aab962a0f2570ba33e539c  openfire_4_4_1.zip
940a2ee60a129a9ccdaf2f07bddcbc595bda9865  openfire_src_4_4_1.tar.gz
59db7e008c0ee76976343f484c706d268382b6c4  openfire_src_4_4_1.zip

Please let us know in the Community Forums of any issues you have; we are always looking for folks interested in helping out with development, documentation, and testing of Openfire. Consider stopping by our web support group chat and saying hi!

For other release announcements and news follow us on Twitter

Posts: 2

Participants: 2

Read full topic

by @akrherz daryl herzmann at August 12, 2019 14:39

August 08, 2019

ProcessOne

ejabberd 19.08

We are pleased to announce ejabberd version 19.08. The main focus has been to further improve ease of use, consistency, performance, but also to start cleaning up our code base. As usual, we have kept on improving server performance and fixed several issues.

New Features and improvements

New authentication method using JWT tokens

You can now authenticate with JSON Web Token (JWT, see jwt.io for details).

This feature allows you to have your backend generate authentication tokens with a limited lifetime.

Generating JSON Web Key

You need to generate a secret key to be able to sign your JWT tokens.

To generate a JSON Web Key, you can for example use JSON Web Key Generator, or use your own local tool for production deployment.

The result looks like this:

{
  "kty": "oct",
  "use": "sig",
  "k": "PIozBFSFEntS_NIt...jXyok24AAJS8RksQ",
  "alg": "HS256"
}

Save your JSON Web Key in a file, e.g. secret.jwk.

Be careful: This key must never be shared or committed anywhere. With that key, you can generate credentials for any users on your server.

Configure ejabberd to use JWT auth

In ejabberd.yml change auth_method to jwt and add the jwt_key option pointing to secret.jwk:

auth_method: jwt
jwt_key: "/path/to/jwt/key"

Generate some JWT tokens

See jwt.io for an example of how to generate JSON Web Tokens. The payload must look like this:

{
  "jid": "test@example.org",
  "exp": 1564436511
}

And the encoded token looks like this:

eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJqaWQiOiJ0ZXN0QGV4YW1wbGUub3JnIiwiZXhwIjoxNTY0NDM2NTExfQ.SMfzCVy8Nv5jJM0tMg4XwymIf7pCAzY8qisOSQ5IAqI
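
If your backend runs on the JVM, a token with that shape could be produced with the auth0 java-jwt library; this is only a sketch under that assumption, and the shared secret must match the key material behind the "k" field of the JWK configured in ejabberd:

import java.util.Date;

import com.auth0.jwt.JWT;
import com.auth0.jwt.algorithms.Algorithm;

public class EjabberdTokenGenerator {
    public static void main(String[] args) throws Exception {
        // Placeholder secret; must correspond to the key configured in secret.jwk.
        String secret = "change-me";

        String token = JWT.create()
                .withClaim("jid", "test@example.org")            // the authenticating user
                .withExpiresAt(new Date(1564436511L * 1000))     // "exp" claim, encoded in seconds
                .sign(Algorithm.HMAC256(secret));

        // The user presents this string as the XMPP password.
        System.out.println(token);
    }
}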

Authenticate on XMPP using encoded token as password

Now, the user test@example.org can use this token as a password before 1564436511 epoch time (i.e. July 29, 2019 21:41:51 GMT).

New configuration validator

With ejabberd 19.08, we introduce a new configuration checker, giving more precise configuration guidance in case of syntax errors or misconfiguration. This configuration checker has also been released as an independent open source project: yconf.

The new configuration validator makes it possible to improve the configuration parsing. For example, it supports the following:

Better handling of Erlang atom vs string

There is no need to quote a string to express the fact that you want an atom in the configuration file: the new configuration validator handles the Erlang type mapping automatically.

More flexible ways to express timeouts

Now, all timeout values can be expanded with suffixes, e.g.

negotiation_timeout: 30s
s2s_timeout: 10 minutes
cache_life_time: 1 hour

If no suffix is given, the timeout is assumed to be in seconds.

Atomic configuration reload

The configuration will either be fully reloaded or rolled back.

Better, more precise error reporting

Here are a couple of examples of the kind of message that the new configuration validator can produce.

In the following example, the validator will check against a value range:

14:15:48:32.582 [critical] Failed to start ejabberd application: Invalid value of option loglevel: expected integer from 0 to 5, got: 6

More generally, it can check value against expected types:

15:51:34.007 [critical] Failed to start ejabberd application: Invalid value of option modules->mod_roster->versioning: expected boolean, got string instead

It will report invalid values and suggest fixes in case error was possibly due to a typo:

15:50:06.800 [critical] Failed to start ejabberd application: Invalid value of option modules->mod_pubsub->plugins: unexpected value: pepp. Did you mean pep? Possible values are: flat, pep

Prevent use of duplicate options

Finally, it will also fail on duplicate options and properly report the error:

15:56:35.227 [critical] Failed to start ejabberd application: Configuration error: duplicated option: s2s_use_starttls

Duplicate options were a source of errors, as one option could silently shadow another, possibly in an included file.

Improved scalability

We improved the scalability of several modules:

Multi-User chat

The MUC room module is more scalable and can support more rooms by hibernating a room after a timeout. Hibernating means removing the room from memory when it is not in use and reloading it on demand.

MUC message processing has also been changed to make proper use of all available CPU cores. MUC room message handling is now faster and supports higher throughput on SMP architectures.

SQL database handling

We improved the way the SQL pool is managed to better handle high load. We also improved the MySQL schema a bit to help with indexing.

Changed implementation of mod_offline option use_mam_for_storage

The previous version tried to determine the range of messages to fetch from MAM by storing the time when the last user resource disconnected. That approach had a couple of edge cases that could cause problems: for example, in case of a hardware node crash, the disconnect time could not be stored, so there was no data to initiate the MAM query.

The new version doesn't track user disconnects; instead, it simply ensures that we have the timestamp of the first message that is going to be put into storage. Measurements showed that, with caching on top, this check is not very costly, and since it is much more robust we decided to move to this safer approach.
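
As a reminder of how this mode is enabled, here is a minimal ejabberd.yml sketch (the modules section is simplified and should be merged into your own configuration):

modules:
  mod_mam: {}
  mod_offline:
    use_mam_for_storage: true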

New option captcha_url

Option captcha_host is now deprecated in favor of captcha_url. However, it’s not replaced automatically at startup, i.e. both options are supported with ‘captcha_url’ being the preferred one.
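
For example, in ejabberd.yml (the URL below is only a placeholder for wherever your deployment's CAPTCHA handler is reachable):

captcha_url: https://example.org:5443/captcha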

Deprecated ‘route_subdomains’ option

This option was introduced to fulfil the requirements of RFC 3920, section 10.3, but in practice it was very inconvenient, and many admins were forced to change its value to 's2s' (i.e. to behaviour that violates the RFC). Also, this requirement seems to be no longer present in RFC 6120.

Those admins who used this option to block s2s with their subdomains can use the 's2s_access' option for the same purpose.

API changes

Renamed arguments from ‘Server’ to ‘Host’

Several ejabberd commands still used 'Server' as an argument name instead of the more common 'Host'. Those arguments have been renamed, and backward compatibility allows old calls to keep working.

The eight affected commands are:
– add_rosteritem
– bookmarks_to_pep
– delete_rosteritem
– get_offline_count
– get_presence
– get_roster
– remove_mam_for_user
– remove_mam_for_user_with_peer

If you are using these calls, please start updating your parameter names to Host when moving to ejabberd 19.08. You will thus use a more consistent API and be future-proof.
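
For example, if you call these commands through ejabberd's HTTP API, the renamed argument would be passed as follows. This is only a hedged sketch: the mod_http_api endpoint, port and credentials below are assumptions to adapt to your own deployment.

import requests

# Hypothetical call to get_roster over ejabberd's HTTP API, using the renamed
# 'host' argument instead of the old 'server'.
response = requests.post(
    "https://xmpp.example.org:5443/api/get_roster",
    json={"user": "test", "host": "example.org"},
    auth=("admin@example.org", "admin-password"),
)
print(response.json())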

Technical changes

Removed Riak support

Reasons:

  • Riak DB development has almost halted since Basho went out of business
  • riak-erlang-client is abandoned and doesn’t work correctly with OTP22
  • Riak is slow in comparison to other databases
  • Missing key ordering makes it impossible to implement range queries efficiently (e.g. MAM queries)

If you are using Riak, you can contact ProcessOne to get assistance migrating to DynamoDB, a horizontally scalable key-value datastore made by Amazon.

Erlang/OTP requirement

Erlang/OTP 19.1 is still the minimum supported Erlang version for this release.

Database schema changes

There is no change to perform on the database to move from ejabberd 19.05 to ejabberd 19.08.
Please, make a backup before upgrading.

It means that an old schema for ejabberd 19.05 will work on ejabberd 19.08. However, if you are using MySQL, you should note that we changed the type of the server_host field to perform better with indexes. The change is not mandatory, but switching it to varchar(191) will produce more efficient indexes.

You can check the upgrade page for details: Upgrading from ejabberd 19.05 to 19.08

Download and install ejabberd 19.08

The source package and binary installers are available at ProcessOne. If you installed a previous version, please read ejabberd upgrade notes.
As usual, the release is tagged in the Git source code repository on GitHub. If you suspect that you've found a bug, please search for it or file a bug report in Issues.


Full changelog
===========

Administration
– Improve ejabberd halting procedure
– Process unexpected Erlang messages uniformly: logging a warning
– mod_configure: Remove modules management

Configuration
– Use new configuration validator
– ejabberd_http: Use correct virtual host when consulting trusted_proxies
– Fix Elixir modules detection in the configuration file
– Make option ‘validate_stream’ global
– Allow multiple definitions of host_config and append_host_config
– Introduce option ‘captcha_url’
– mod_stream_mgmt: Allow flexible timeout format
– mod_mqtt: Allow flexible timeout format in session_expiry option

Misc
– Fix SQL connections leakage
– New authentication method using JWT tokens
– extauth: Add ‘certauth’ command
– Improve SQL pool logic
– Add and improve type specs
– Improve extraction of translated strings
– Improve error handling/reporting when loading language translations
– Improve hooks validator and fix bugs related to hooks registration
– Gracefully close inbound s2s connections
– mod_mqtt: Fix usage of TLS
– mod_offline: Make count_offline_messages cache work when using mam for storage
– mod_privacy: Don’t attempt to query ‘undefined’ active list
– mod_privacy: Fix race condition

MUC
– Add code for hibernating inactive muc_room processes
– Improve handling of unexpected iq in mod_muc_room
– Attach mod_muc_room processes to a supervisor
– Restore room when receiving a message or a generic iq for not started room
– Distribute routing of MUC messages across all CPU cores

PubSub
– Fix pending nodes retrieval for SQL backend
– Check access_model when publishing PEP
– Remove deprecated pubsub plugins
– Expose access_model and publish_model in pubsub#metadata

by Mickaël Rémond at August 08, 2019 17:32

August 07, 2019

Monal IM

Facebook ruins this again.

iOS 13 will restrict the ability of VOIP apps to run in the background. This will impact Monal. If you haven't followed this saga, it has been a sequence of changes from iOS 2, iOS 4 and then iOS 10. It looks like Facebook has been abusing this privilege to monitor people in the background. This is why we can't have nice things. I will test the impact on Monal and report back.

by Anu at August 07, 2019 03:56

August 06, 2019

hrxi

Ninth and tenth week: Interoperability fun

After finishing the SOCKS5 bytestreams transport for Jingle (S5B, XEP-0065, XEP-0260), I was asked whether I had already done interoperability testing with other clients for the fallback to in-band bytestreams.

flow: […] which other transports do you support? How far has interoperability testing between different implementations be done?

hrxi: only socks5 and ibb, conversations and gajim both work fine

flow: […] did you also test the socks5 to ibb fallback?

hrxi: no, that doesn’t work yet

flow: uh, maybe you find the time to implement that in the remaining two weeks, or are there other plans?

I guess I should’ve seen it coming at this point. Here’s how the fallback should work, according to the Jingle SOCKS5 XEP (XEP-0260#Fallback), excluding acks and simplified (dropping a lot of elements that need to be present):


1. <jingle action="session-initiate"> <transport xmlns="j-s5b" /> </jingle>
                                 ==>

2. <jingle action="session-accept"> <transport xmlns="j-s5b" /> </jingle>
                                 <==

3. <jingle action="transport-info"> <transport xmlns="j-s5b"> <candidate-error /> </transport> </jingle>
                                 <=>

4. <jingle action="transport-replace"> <transport xmlns="j-ibb" /> </jingle>
                                 ==>

5. <jingle action="transport-accept"> <transport xmlns="j-ibb" /> </jingle>
                                 <==

6. <open xmlns="ibb" />
                                 ==>

First, the normal Jingle session initiation happens: the client offering a file sends a session-initiate including SOCKS5 proxy information and waits. Then, when the receiving user accepts the file, or if the receiving client automatically accepts the file based on some conditions, it sends a session-accept including its SOCKS5 proxy information.

In point 3, we start to deviate from the normal “happy path”; in order to test the fallback, I made Dino send no local candidates and skipped checking all of the remote candidates, leading to both clients sending a transport-info with a candidate-error, meaning that none of the candidates offered by the peer work.

Now (4), according to the XEP, the initiator (left side) should send a transport-replace to change the transport method to in-band bytestreams.

In 5, the responder accepts this change.

After getting this response, the initiator is supposed to open the in-band bytestream (XEP-0261#Protocol flow) to complete the negotiation.

It took a few tries until I had Dino-Dino fallback working. The other clients were also fun:

Dino → Conversations: The best case. It fails in step 2, but only because of some minor problems: the block-size negotiation doesn't take into account what Dino sent. Dino asks for a block size of 4096 bytes; Conversations is only allowed to lower this value, but sets the consensus to 8192 bytes. I ignored this, and the fact that Conversations did not send a sid, and got a working fallback. Conversations#3515.

Conversations → Dino: Fails in step 4; for some reason Conversations doesn't send a transport-replace after the SOCKS5 transport failed. Conversations#3514. EDIT: Seems to have been my mistake; I couldn't reproduce it when testing with the Conversations developer.

Dino → Gajim: Fails in a funny way, in step 5: Gajim responds to transport-replace with a session-accept. session-accept is only valid in response to session-initiate, so I don't know how that happens. A Conversations user has already reported that issue. Gajim#9692.

Gajim → Dino: Gets stuck in step 6; Gajim doesn't open the in-band bytestream even though it is the initiator. Gajim#9784.

Of course, I'm not sure that I diagnosed all of these issues correctly; these might be mistakes on my part and in my code, too. Let's see how these issue threads develop.

EDIT: And of course there was at least one mistake by me; Conversations → Dino seems to work.

by hrxi at August 06, 2019 00:00

August 05, 2019

João Duarte

9th GSoC Week report


Code review was in order last week, and some major improvements were made to my pseudo-spaghetti code, which should now be more readable and maintainable. Below you can check out the complete list of changes recently made to Prosody's files in order to polish the installer!

Events

prosodyctl

1: Swapped prints for the show_message function
2: Removed a comment from the remove command
3: Rewrote the install command
4: Rewrote the remove command
5: Rewrote the list command, to make it cleaner and easier to work with
6: Install, remove and list now use the execute_command instead
7: Removed the auxiliary command enabled_plugins

core/configmanager

1: Added support to complement_lua_path

util/prosodyctl

1: Added the check_flags function - changeset
2: Added the call_luarocks function
3: call_luarocks command differentiates output, when being called by install/remove
4: The call_luarocks function can now also deal with the list command
5: Added the execute_command function
6: check_flags now always considers that the plugin is specified as the penultimate argument it receives
7: Changed a comment

util/startup

1: The setup_plugindir function now uses the resolve_relative_path function
2: setup_plugindir now uses lfs.mkdir to check/create directories
3: Removed/rewrote comments at setup_plugindir
4: removed redundant variable
5: Now also check cpath for duplicates
6: Now uses complement_lua_path to deal with lua's path/cpath
7: Reorganized some code at setup_plugindir

util/paths

1: Added the function complement_lua_path
2: Refactored a variable, to avoid shadowing

Miscellaneous

make_repo: moved the make_repo script into the tools/ directory

Difficulties/Considerations

Nothing too special this week on the technical side, but time was cut a bit short due to personal reasons, so I need to do some catch-up again! Google is stressing that deadlines are approaching, and time is flying reaaaaaaally fast.
The most important objective, cleaning up the code, was given priority, and I think a good effort was made towards it. Some potential issues are still lying around in my mind, but documentation needs to be seriously tackled.

Future goals

This week, at the very least, the main focus should be on documentation, plus a couple of hours debugging some areas that I believe might cause problems.

by João Duarte (noreply@blogger.com) at August 05, 2019 15:05

August 04, 2019

Ignite Realtime Blog

Thread Dump 1.0.0 plugin released

@wroot wrote:

The Ignite Realtime community is happy to announce the release of a new plugin.

Thread Dump allows an admin to easily copy the current thread dump for investigation, or to provide it for support in these forums. It also enables Openfire to generate thread dumps automatically when certain conditions are met.

Your instance of Openfire should automatically display the availability of updates. Alternatively, you can download the new releases from the plugins downloads page.

For other release announcements and news follow us on Twitter


by @wroot wroot at August 04, 2019 13:03

August 03, 2019

Monal IM

iOS 3.8.1

I have pushed out the next iOS release. It should be out in a few days. No new features, just a lot of bug fixes.

  • Fixed many crashes
  • Removed bouncing/scrolling when entering a conversation or sending a message
  • Replying from a notification can be encrypted
  • Joining a group does not use “test”
  • Image previews have better iPhone X support

by Anu at August 03, 2019 18:43

July 31, 2019

João Duarte

8th GSoC Week Report

After some fusion dance, we have some stuff working neatly! Last week was all about the Lua environment and improvements aimed at user-friendliness, which you can check out down below with the latest changesets and demos.

Events

prosodyctl

  • Removed most development/experimental commands
  • Created a script that automatically sets up a ready-to-serve repository with rockspecs for all of prosody's plugins (demo)
  • Various improvements/updates to the main commands:
  • prosody uses a separate folder to deal with non-standard plugins - demo
  • Users can configure their own path, and override it at any time from the command line if needed - demo
  • Updated the startup utility script regarding management of default/custom installer directories and lua environment

Difficulties

The greatest obstacles this past week were all related to the concept of the Lua environment and its setup, which I wasn't really familiar with. Right now I *think* I have a clearer idea of what I'm doing when messing with that, after some explanations and research. Prosody now complements its process's Lua environment with the required paths.
For people who are new to the concept like me, this is a cool source of info that I've found and you might want to check out.

Future goals

Code right now is a bit too messy after all the turmoil to get things working, and that needs to be improved this week.
There are also some minor issues left regarding rockspecs and pathing that I need to check to make sure they are right. Specifically, I'm still having some trouble with modules that have more than one file or more complex structures, and with making sure prosody is really finding everything it needs regarding installed plugins and Lua libraries, although so far it seems to be okay.
Some first serious steps into documentation should also be made before the next blog post. Time is flying, and it's August right around the corner. This is GSoC's last month =[

Considerations

Don't know a thing about how to use moon rocks? Well, I made a breathtaking attempt at explaining some basics, which you can check here!

by João Duarte (noreply@blogger.com) at July 31, 2019 02:12

Luarocks tutorial


Prosody's plugin installer is making use of Luarocks to get things working. Therefore, it makes sense that we look a bit into Luarocks to see how it works.

Concepts

Here is what you need to know. Luarocks uses files called rockspecs to get things installed. These are basically instruction files that tell the program how to do the work. We can pack the sources and the rockspec into a single file, called a rock. We can install libraries from both rocks and rockspec files. The code is installed into what is called a rock tree, a folder with a specific internal structure, organized in a way to help with the management of libraries and servers.

Installing, removing and listing

If you are working on Debian you can just use:
  • sudo apt-get install luarocks
Now, there might be some trouble here and there, depending on your machine, or if you are on a different OS, but the folks at luarocks have tips for almost every possible situation, just check here:
Worst case scenario, head over to their support section and surely you'll find someone to help you.

Now, with that out of the way, lets (The attentive reader will notice I misspelled "lets" at the beginning of the next video 😅) check how it works! Topics we are covering here:
  • luarocks help
  • configuration section
  • using the global and local trees
  • using install, remove and list
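
As a quick cheat sheet for the topics above, these are the standard luarocks commands involved (the module name is just an example):

luarocks help                        # list available commands and general usage
luarocks list                        # show rocks installed in the default tree
luarocks install luasocket           # install a rock from the default server
luarocks remove luasocket            # remove it again
luarocks install --local luasocket   # use your per-user (local) tree instead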

The --tree flag

We can use this flag to specify a custom path for a rock tree, without changing anything in the configuration file.
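
For example (the tree path is arbitrary):

luarocks install --tree ./my-rock-tree luasocket
luarocks list --tree ./my-rock-tree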

The --server flag

We can use this flag to specify the place from which we are getting our rocks. We can reference a rock tree, a repository or a server. These terms are used somewhat interchangeably, so watch out! Here I'll show you how to use a remote source being served to us.
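
For example (the URL and module name are placeholders for whatever repository is serving your rockspecs):

luarocks install --server=https://example.org/rocks mod_example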

Writing rockspecs

Okay, so let's keep this simple. There are a number of things to take into account, but all of them lead to the same place. We need to write rockspecs related to the sources that we want to install. We can do this with the write_rockspec command, which will automatically create rockspecs and fill in some fields for us. Most of them have to be filled in by hand though. Notably, the source.url field must be filled in if we want to deal with rocks/rockspecs from a remote server.
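
To give an idea of the end result, here is a minimal, hypothetical rockspec sketch (names, version and URL are made up, and real rockspecs usually carry more metadata); it would be saved as mod_example-0.1-1.rockspec:

package = "mod_example"
version = "0.1-1"
source = {
   url = "git+https://example.org/mod_example.git"  -- where the sources live
}
description = {
   summary = "An example plugin",
   license = "MIT"
}
build = {
   type = "builtin",
   modules = { mod_example = "mod_example.lua" }
}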

The make command

You can also make a rockspec work locally, with the make command. The make command is your answer in those cases where you already have everything on your local machine.
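
For example, from the directory containing both the sources and the rockspec:

luarocks make mod_example-0.1-1.rockspec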

Installing from our own repo/server

We need to write down some server in the source.url field. By default, luarocks points to its official repository, but we can use any repository we want. That's what we did when we used the --server flag, which overrides the default source. The difference is that we were using an already existing repository. What if we want to make our own?
Well, there are a couple of ways, but if you are inexperienced with luarocks and servers like I was/am, be warned: this thing is the gift that keeps on giving!

But let's look at the easiest way, which turns out to be quite accessible, in my experience. We can make use of a traditional git/mercurial repository as a source for our own plugins. After that we'll want to make our rockspecs available on a local/remote server, in order to distribute stuff around. We can make use of the luarocks-admin make_manifest command to do some heavy lifting for us and set up everything we need =)
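
A hedged example, assuming the rockspecs have been copied into the directory that your web server exposes:

luarocks-admin make_manifest /var/www/html/luarocks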
After that, the rockspecs are available from my local server. If you are unfamiliar with how to get an Apache server going, check this out.
And these are the basics about Luarocks. There is plenty more though, so be sure to check their docs if you need.
Cheers!

    by João Duarte (noreply@blogger.com) at July 31, 2019 02:05

    July 25, 2019

    Jérôme Poisson

    Salut à Toi v0.7 « La Commune »

    Salut to you!

    It is with great pleasure and a sort of relief that I announce the release of "Salut à Toi" v0.7.0 (La Commune).

    As a reminder, "Salut à Toi" (or SàT) is a decentralised communication ecosystem, based on the established XMPP standard.
    It features many functionalities (instant messaging, file sharing, blogging/microblogging, event organisation, forums, etc.) and has the particularity of being multi-interfaced (different "frontends" are available for the web, desktop, mobile devices or even the terminal and command line).

    The preparation of this version took 3 years. This is the first version we can call "general audience"; in other words, it is also approachable by people without a technical background. Still, there are improvements to be made at the frontend and "user experience" level.

    I shall not enumerate all the functionalities with screenshots again; you can consult the announcement of the alpha version for that. I'd rather explain some of the major updates that have been made:

    Cagou, the desktop/mobile interface

    As promised during our modest crowdfunding campaign, Cagou is the name of the new desktop/mobile frontend (Android only at this moment). To share a little story with you: the name is a reference to a beautiful bird which does not fly but barks, and which is endemic to the no less beautiful island of New Caledonia. It is also a nod to Kivy, the framework we are using, whose name and logo may remind you of the kiwi of New Zealand.

    So, this interface is multi-platform and does not focus only on instant messaging: you can use it for file sharing (e.g. videos/photos between your computer and phone), or as a remote control for your media player. It is of course planned that it will handle blogging in the near future.

    Cagou is designed to be usable on a small screen as well as in full-screen mode on the desktop, and allows dividing the screen into zones, for instance to follow several discussions at the same time.

    On Android, the application still has some problems with responsiveness and occasional bugs. A lot of problems will be corrected with the Python 3 port. Consider this version as a first one, released to gather your comments and suggestions.

    Cagou on Android

    End-to-end encryption

    Of course, SàT has already featured end-to-end encryption for several years through "OTR"; however, this version sees the arrival of "OMEMO", an algorithm correcting the issues of the previous one (in particular, allowing encrypted messages to be sent to offline contacts, or to be displayed on multiple devices). OMEMO is implemented only for simple conversations ("1:1", between 2 people) at the moment, but the next version will surely handle end-to-end-encrypted group conversations.

    Events, photo-albums, forums, ticket-handling, merge-requests

    Numerous new functionalities have appeared in this version. You can now create and manage events (e.g. for the family) with the classic list of invited people and "RSVP"-style replies (attending, not attending, maybe attending). It is possible to invite people even if they don't have an XMPP account, using "invitation" accounts which are automatically generated and sent to their email addresses.

    You can create and share photo albums, a specialisation of file sharing. Creating one is not yet possible from the graphical interface; however, this is planned to be improved soon. Browsing the photos is simple from Libervia (the web interface).

    Forums are also part of the party, as well as ticket handling and merge requests. These last two functionalities were implemented for the project's own needs, but they are very flexible and can (in the next version) easily be used in everyday life (e.g. as shopping lists, to-do lists, etc.).

    All of this takes advantage of XMPP "PubSub" capabilities and can benefit from its permission system (we can imagine a shopping list shared between family members, indicating who bought what).

    Note: to actually benefit from all this, it is necessary to use the "SàT Pubsub" service, a project made for the needs of Salut à Toi (but which can be used by any XMPP-based program).

    a blog on Libervia

    a photo album on Libervia

    Web Framework

    The development of Libervia, the web frontend of SàT, has evolved to turn it into a web framework. The reason for this evolution is the need for a very flexible interface, one which allows new ideas and functionalities to be implemented and tested easily. The goal is to have a naturally decentralised and federated framework (thanks to XMPP) which integrates simply into the ecosystem. I will only speak briefly about the technical details, but the point is to combine Jinja2 with SàT and use PubSub as the database. The CLI frontend (jp) also allows the same models to be used for static rendering (for instance, to generate a static blog or chat archives).

    It is with this framework that the above-mentioned features were developed; they are organised in "pages" which are meant to be simple to use and to work without JavaScript when possible (which is not the case for the chat). The new official website runs on this new framework, and you'll find introductory documentation there.

    Even More

    I won't go too much into details as the updates are too numerous, but it is worth mentioning that SàT can also be used to store your folders on the server (it can be used as a "component"), and that jp, the command-line frontend, lets you manage a lot of things (find or publish a blog article, send encrypted or unencrypted messages, retrieve someone's avatar, etc.).

    Installation

    Salut à Toi is available on Debian and its derivatives, but be aware that this is only the case for the backend, the console interface and the command line. By the way, help to get the Cagou package in there too would be much appreciated.

    It is also available in Arch Linux's AUR repository, where you'll also find the development versions.

    The Flatpak packages allow easy installation of Cagou, Primitivus (terminal) and jp (command line) on most GNU/Linux distributions; you can find the links on the main page of the site.
    You can, of course, also use pip, the Python package manager. A simple pip2 install --user sat followed by pip2 install --user cagou and pip2 install --user libervia should be enough. The instructions are available in the site's documentation.

    For Android, you can find an APK at this link (unsigned; a definitive version will follow in a couple of days). After the release, I will take steps to make it available on F-Droid, and eventually the "Play Store".

    Even though SàT should technically work on Mac OS X, Windows and *BSD, it has not yet been tested on these platforms (as I do not have any of these devices). I have had some feedback concerning Mac use. If you are interested, help with testing and packaging would be nice.

    Future

    The principal planned development for 0.8 is the port to Python 3, which is now finally possible as no dependency is blocking it any more, and it is the only thing promised for the next version. Lots of big steps are to follow, and it will be inevitable to make some choices. Don't hesitate to give me your opinions/feedback in the comments.

    Video conference

    With Jingle already implemented, video conferencing has been planned for a long time. It shouldn't be too difficult to implement in the web interface (thanks to WebRTC), but it would certainly need a lot of work on desktop/Android (to evaluate the different options, GStreamer being the hot candidate at the moment, and to integrate it with Cagou). This is a significant piece of work.

    Improving the file sharing

    SàT already allows advanced file sharing (more than simply sending files to the server), including a server component. It is very possible that future developments will continue in this direction.

    share a directory with Cagou

    iOS Version

    It would be technically possible to use Cagou on the iPhone. However, there are several obstacles to deal with, especially legal ones (the Apple App Store is not compatible with the current licence, AGPL 3+). A lot of time and investment would be needed. This is clearly not the priority, but keep in mind that an iOS version is doable.

    ActivityPub Gateway

    With this version, SàT can be used as a "component", i.e. a server-side service. This possibility could be used to build an ActivityPub gateway that would allow two-way communication with projects using this protocol. There is already somebody working on a similar gateway for Prosody, so I will certainly wait to see how it evolves before starting one myself.

    Chat Evolution

    Though it's already functional, the chat could become very complete if we take the time for it. Here, we are talking about adding some missing features (like reactions or editing the last sent message), improving file handling, copy/pasting of code, end-to-end group encryption, etc.
    All of this is on the roadmap; the question is whether it is a priority or not.

    Improvement of User Experience

    This is probably the project which will have priority once the Python 3 port is done. SàT already has a lot of features, but the interfaces need some work to become really user-friendly. I would especially like to work on the launch screen and contact discovery. There are also lots of "small details" which all together take a lot of time, but which make the user experience much more pleasant: desktop integration, file-sharing actions, easier file selection, etc.

    Well, so much for the future steps; all of this will take time, but again, your feedback is highly welcome. There are other ideas to think about, but one has to choose one's priorities.

    But…

    Salut à Toi is a huge project with potential, and it is currently developed by a single person. I currently have only one day a week dedicated to SàT (besides mornings, evenings and nights), and this rhythm is very difficult to keep up. In the coming months I will seriously study the project's financing possibilities. It will be necessary for me to find a way to keep the project on track.

    Useful Links

    • official website: https://salut-a-toi.org
    • documentation: https://salut-a-toi.org/documentation
    • my blog: https://www.goffi.org (I publish there weekly progress notes)

    by goffi at July 25, 2019 14:44

    hrxi

    Eighth week: Standards

    Sometimes, XEPs are really imprecise or even lack information about some interactions. Most of the time, it’s about error handling where it’s not really specified what to do in error cases, as the XEP mostly deals with the “happy path” when everything is working.

    This week, I tried to get SOCKS5 bytestreams (S5B, XEP-0065, XEP-0260) working for file transfers. Most of the stuff simply worked after implementing the S5B transport, since the Jingle module was offering a transport-agnostic bytestream to the outside world, or in this case to Jingle file transfers, XEP-0234. However, file transfers with Conversations got stuck at 100%. After some debugging, I found out that Conversations doesn't close the SOCKS5 connection after sending the file. Since I relied on the peer closing the connection after sending the complete file, this led to Conversations waiting for me to acknowledge the file transfer and me waiting for Conversations to close the connection. I opened Conversations#3500 about this inconsistency.

    Reading into XEP-0234, I tried to find out how to detect the end of the file transfer. The two relevant sections I found were

    Once a file has been successfully received, the recipient MAY send a Jingle session-info message indicating receipt of the complete file, which consists of a <received/> element qualified by the 'urn:xmpp:jingle:apps:file-transfer:5' namespace. The <received/> element SHOULD contain 'creator' and 'name' attributes sufficient to identify the content that was received.

    (8.1 Received)

    and

    Once all file content in the session has been transfered, either party MAY acknowledge receipt of the received files (see Received) or, if there are no other active file transfers, terminate the Jingle session with a Jingle session of <success/>. Preferably, sending the session-terminate is done by the last entity to finish receiving a file to ensure that all offered or requested files by either party have been completely received (up to the advertised sizes).

    (6.2 Ending the Session) (link might die in the future, has a weird anchor).

    I wasn't able to find the relevant information, so I looked at Conversations' source code to find out how it determines when the file transfer is complete. It turns out it simply checks whether it has already read as many bytes as the integer in the <size> field of the initial file offer. Adding that as a "file transfer complete" condition, I can now receive files from Conversations. This, however, is not a general solution, as the <size> field only SHOULD be present when offering a file, so we can't rely on it being available when receiving files over Jingle.

    If anyone knows what the correct way to detect a completed file transfer is, please tell me.

    by hrxi at July 25, 2019 00:00

    July 23, 2019

    Monal IM

    New iOS Beta

    There is an iOS beta update that focuses on push reliability. I have made improvements to push, stream resumption, and compatibility with Gajim.

    by Anu at July 23, 2019 02:15

    July 22, 2019

    João Duarte

    Mid July GSoC Report

    Improving Prosody


    We are already past the mid-way point of GSoC's coding period! Check out the most recent improvements, made through keyboard and mouse over the last few weeks.

    Events

    prosodyctl

    Added the following commands:
    • get_modules - development command, downloads all of prosody modules to a folder, downloaded_modules, in the working directory
    • write_rockspec - development command, writes a rockspec from one of the modules at the auxiliary folder and saves it there
    • make - Installs a rockspec. The rockspec name is the only argument it accepts. The file and its sources have to be located at downloaded_modules/module_we_are_installing.
    • install/remove - Installs/removes a rockspec, either at the working directory's plugins path or at a directory specified by the --tree flag. We need to be in the same folder as the rockspec, otherwise it will search for rocks from Luarock's repository.
    • list - Shows a list of installed rocks. If no argument is passed, it will look at the working directory's plugin path. Otherwise, it will look at the path specified with the --tree flag
    With these, a user can now manage either prosody rocks or regular luarocks directly with the prosodyctl tool.

    Difficulties

    I've spent a lot of time trying to really understand the concepts of server, repository and directory, and their respective differences in the context of using prosody and luarocks. Although I was given some well-written explanations, the concepts weren't really sinking in, and I dragged on a bit trying to push my understanding forward without asking the same questions again, while also trying to put them to use through the current tools. This, however, is not the main focus of the current project, and it ends up being a perhaps unnecessary delay. I find myself particularly vulnerable to these kinds of problems, and I'm trying to deal with them better in order to improve my performance.

    Future Goals

    The main focus right now is dealing with dependencies and improving prosodyctl's usability for the user.
    While luarocks considers dependencies related to Lua libraries, prosody needs to consider those and also other dependencies related to its modules.
    A nice thing to have would be for the plugin installer to automatically deal with the necessary changes to prosody's config file. This is a hard problem and would be a neat extra objective, but for now the most realistic solution is to provide the user with instructions about when and how they should modify prosody's configuration.

    Considerations

    The functionality to get prosody's modules and write rockspecs isn't necessary for the user; it is really there for development convenience at this moment. In the future, the tool should allow the user to easily get the required modules from a prosody server automatically.
    I haven't been able to upload a blog post a week, even though I'd like to. No excuses to offer here; discipline and consistency have to be improved to achieve this goal!
    I've been working on some luarocks and prosodyctl tutorials to partially show the capabilities of the tools. This isn't the main goal of the program, and I have been advised not to spend more time on the blog than on the project itself. I still think these will be useful and hopefully easy for the general public to understand. I hope I can get them out this week, while remaining mostly focused on the main objectives.

    by João Duarte (noreply@blogger.com) at July 22, 2019 14:44