Planet Jabber

May 21, 2022


Gajim 1.4.1

Only a week after the release of Gajim 1.4.0, we’re happy to announce Gajim 1.4.1! 🎉 This release brings several fixes for issues you reported to us. Thanks for your feedback!

What’s New

In order to make it easier to reach us for help, we added a new menu item “Join Support Chat” under “Help”. Clicking it will directly join our support chat.

While redesigning the message window, we moved message timestamps to the right. This leads to a less cluttered look, but it requires users to move their focus from one side to the other. To improve this situation, we moved timestamps back to the left, right next to the nickname.

Windows users please note: Windows builds are now based on Python 3.9, which does not run on Windows 7 or older.


Several issues have been fixed in this release.

  • Update unread counter if being mentioned in a public group chat
  • Windows: Fix notification area icon not disappearing when closing Gajim
  • Fix error if opening chat with missing trust data
  • Fix crash when trying to remove chat history
  • Failsafe for missing avatar images
  • Fix Jingle file transfers for drag and drop and pasted images

Have a look at the changelog for the complete list.


As always, don’t hesitate to contact us or open an issue on our GitLab.

May 21, 2022 00:00

May 17, 2022


ejabberd 22.05

A new ejabberd release is finally here! ejabberd 22.05 represents five months of work and 200 commits, with many improvements (MQTT, MUC, PubSub, …) and bug fixes.

  • Improved MQTT, MUC, and ConverseJS integration
  • New installers and container
  • Support Erlang/OTP 25

When upgrading from the previous version, please note: there are minor changes in the SQL schemas, the included rebar and rebar3 binaries require Erlang/OTP 22 or higher, and make rel uses different paths. There are no breaking changes in the configuration, and only one change in the commands API.

A more detailed explanation of those topics and other features:

New Indexes in SQL for MUC

Two new indexes were added to optimize MUC. These indexes can be added to the database before upgrading to 22.05; they will not affect older versions.

To update an existing database, depending on the schema used to create it:

  • MySQL (mysql.sql or
CREATE INDEX i_muc_room_host_created_at ON muc_room(host(75), created_at);
CREATE INDEX i_muc_room_subscribers_jid USING BTREE ON muc_room_subscribers(jid);
  • PostgreSQL (pg.sql or
CREATE INDEX i_muc_room_host_created_at ON muc_room USING btree (host, created_at);
CREATE INDEX i_muc_room_subscribers_jid ON muc_room_subscribers USING btree (jid);
  • SQLite (lite.sql or
CREATE INDEX i_muc_room_host_created_at ON muc_room (host, created_at);
CREATE INDEX i_muc_room_subscribers_jid ON muc_room_subscribers(jid);
  • MS SQL (mssql.sql):
CREATE INDEX [muc_room_host_created_at] ON [muc_room] (host, created_at);
CREATE INDEX [muc_room_subscribers_jid] ON [muc_room_subscribers] (jid);

Fixes in PostgreSQL New Schema

If you moved your PostgreSQL database from old to new schema using mod_admin_update_sql or the update_sql API command, be aware that those methods forgot to perform some updates.

To fix an existing PostgreSQL database schema, apply those changes manually:

ALTER TABLE archive DROP CONSTRAINT i_archive_sh_peer;
ALTER TABLE archive DROP CONSTRAINT i_archive_sh_bare_peer;
CREATE INDEX i_archive_sh_username_peer ON archive USING btree (server_host, username, peer);
CREATE INDEX i_archive_sh_username_bare_peer ON archive USING btree (server_host, username, bare_peer);

DROP TABLE carboncopy;

ALTER TABLE push_session DROP CONSTRAINT i_push_session_susn;
CREATE UNIQUE INDEX i_push_session_susn ON push_session USING btree (server_host, username, service, node);

ALTER TABLE mix_pam DROP CONSTRAINT i_mix_pam_us;
CREATE UNIQUE INDEX i_mix_pam ON mix_pam (username, server_host, channel, service);
CREATE INDEX i_mix_pam_us ON mix_pam (username, server_host);

CREATE UNIQUE INDEX i_route ON route USING btree (domain, server_host, node, pid);

ALTER TABLE mqtt_pub DROP CONSTRAINT i_mqtt_topic;
CREATE UNIQUE INDEX i_mqtt_topic_server ON mqtt_pub (topic, server_host);

API Changes

The oauth_revoke_token API command has changed its returned result. Check the oauth_revoke_token documentation.

API Batch Alternatives

If you use the delete_old_messages command periodically and have noticed that it can bring your system into an undesirable state of high CPU and memory consumption, there is now an alternative.

You can use delete_old_messages_batch instead, which performs the operation in batches: you set the number of messages to delete per batch and the desired rate of messages to delete per minute.

Two companion commands are added: delete_old_messages_status to check the status of the batch operation, and abort_delete_old_messages to abort the batch process.

There are also new equivalent commands to delete old MAM messages.
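The batching idea can be sketched outside of ejabberd as well. Below is a hypothetical Python sketch (ejabberd itself is written in Erlang; the `delete_batch` callback, parameter names, and defaults here are invented for illustration): delete in fixed-size batches and sleep between batches so the average rate stays under a configured messages-per-minute limit.

```python
import time

def delete_in_batches(message_ids, delete_batch, batch_size=1000,
                      rate_per_minute=60000, sleep=time.sleep):
    """Delete messages in fixed-size batches, throttled to rate_per_minute.

    Illustrative sketch only: `delete_batch` stands in for whatever
    actually removes a batch of rows from storage.
    """
    # Time to wait between batches so the average rate stays at the limit.
    interval = 60.0 * batch_size / rate_per_minute
    deleted = 0
    for start in range(0, len(message_ids), batch_size):
        batch = message_ids[start:start + batch_size]
        delete_batch(batch)
        deleted += len(batch)
        if start + batch_size < len(message_ids):
            sleep(interval)
    return deleted
```

Injecting `sleep` keeps the throttling testable: a no-op sleep runs the batches back-to-back, while the default spreads the load over time.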

Erlang/OTP and Elixir

Erlang/OTP 25 is now supported. As it is a brand-new version, you may prefer to stay on 24.3 or an earlier version for stable deployments.

Note that ejabberd can be compiled with Erlang as old as 19.3, but the rebar and rebar3 binaries included with ejabberd 22.05 require at least Erlang 22. This means that, to compile ejabberd 22.05 with those tools using an Erlang version between 19.3 and 21.3, you need a compatible rebar/rebar3 binary. If your operating system doesn’t provide a suitable one, you can download the old binaries: rebar from ejabberd 21.12 and rebar3 from ejabberd 21.12.

Regarding Elixir supported versions:

  • Elixir 1.4 or higher is supported for compilation, but:
  • Elixir 1.10 is required to build OTP releases (make rel and make dev)
  • Elixir 1.11 is required to run make relive
  • Elixir lower than 1.11.4 requires Erlang lower than 24 to build OTP releases

Improvements in mod_conversejs

mod_conversejs was introduced in ejabberd 21.12 to serve a simple page for the Converse.js XMPP web browser client.

Several improvements in mod_conversejs now allow a simpler configuration, and more customization at the same time:

  • The options now support the @HOST@ keyword
  • The options now support auto, which uses local or remote Converse files
  • Converse’s auth and register options are set based on ejabberd’s configuration
  • default_domain: This option now has @HOST@ as its default value, not the first defined vhost
  • conversejs_options: New option to set additional options for Converse
  • conversejs_resources: New option to serve Converse’s files (no need to set up an additional web server)

For example, if you have downloaded Converse, you can now set up WebSocket, mod_conversejs, and serve Converse without an additional web server, on an encrypted port, as simply as:

    listen:
      -
        port: 443
        module: ejabberd_http
        tls: true
        request_handlers:
          /websocket: ejabberd_http_ws
          /conversejs: mod_conversejs

    modules:
      mod_conversejs:
        conversejs_resources: "/home/ejabberd/conversejs-9.0.0/package/dist"

With that configuration, Converse is available at https://localhost/conversejs

More details in the mod_conversejs documentation.

New Installers

For many years, the release of a new ejabberd source code package was accompanied by binary installers, built using InstallBuilder and CEAN, and available on the ProcessOne Downloads page.

Starting with ejabberd 22.05, there are new installers that use a completely different build method:

  • they are built using the tools provided in PR 3781
  • they use the most recent stable dependencies
  • they are available for linux/amd64 and linux/arm64 architectures
  • they are built automatically using the Installers Workflow
  • for stable releases, they are available for download in the ejabberd GitHub Releases
  • they are also built for every commit on the master branch, and available for download in the results of the Installers Workflow
  • if the installer is run by root, it installs into /opt/ejabberd* and sets up a systemd service
  • if run by a regular user, it asks for the installation path

However, compared to the old installers, those new installers:

  • do not ask for domain: now you must edit ejabberd.yml and set the hosts option
  • do not register the first Jabber account and grant admin rights: you must do it yourself

Please give these new installers a try, and report any problems, improvements, or ideas.

New Container Image

In addition to the ejabberd/ecs Docker container image published in Docker Hub, there is a new container image published in ejabberd GitHub Packages.

Its usage is similar to the ejabberd/ecs image, with some benefits and changes worth noting:

  • it’s available for linux/amd64 and linux/arm64 architectures
  • it’s also built for the master branch, in addition to the stable ejabberd releases
  • it includes fewer customizations to the base ejabberd compared to ejabberd/ecs
  • it stores data in /opt/ejabberd/ instead of /home/ejabberd/

See its documentation in CONTAINER.

If you used previous images from that GitHub Packages registry, please note: until now they were identical to the ones in Docker Hub, but the new 22.05 image is slightly different: it stores data in /opt/ejabberd/ instead of /home/ejabberd/. You can either update the container volume paths for this new image, or switch to Docker Hub to continue using the same old images.

Source Code Package

Until now, the source code package available in the ProcessOne Downloads page was prepared manually together with the binary installers. Now all this is automated in GitHub, and the new source code package is simply the same one available in GitHub Tags.

The differences are:

  • instead of tgz it’s now named tar.gz
  • it contains the .gitignore file
  • it lacks the configure and aclocal.m4 files

The compilation instructions have been slightly improved and moved to a separate file.

New make relive

This new make relive is similar to ejabberdctl live, but without requiring you to install or build an OTP release: it compiles and starts ejabberd immediately!

Quickly put:

  • Prepare it with: ./ && ./configure --with-rebar=./rebar3 && make
  • Or use this if you installed Elixir: ./ && ./configure --with-rebar=mix && make
  • Start without installing (it recompiles when necessary): make relive
  • It stores config, database and logs in _build/relive/
  • There you can find the well-known script: _build/relive/ejabberdctl
  • In that Erlang shell, recompile the source code and reload it at runtime: ejabberd_admin:update().

Please note: when make relive uses Elixir’s Mix instead of Rebar3, it requires Elixir 1.11.0 or higher.

New GitHub Workflows

As you may have noticed while reading these release notes, there are new GitHub workflows to build and publish the new installers and the container images, in addition to the existing Common Tests suite.

The most recently added workflow is Runtime. The Runtime workflow ensures that ejabberd compiles with Erlang/OTP 19.3 up to 25, using rebar, rebar3, and several Elixir versions. It also checks that an OTP release can be built and started, that an account can be registered, and that ejabberd stops cleanly.

See its source code runtime.yml and its results.

If you have trouble compiling ejabberd, check whether those results reproduce your problem, and also see the steps used to compile and start ejabberd on Ubuntu.

Translations Updates

The German, Portuguese, Portuguese (Brazil), Spanish and Catalan translations are updated and completed. The French translation was greatly improved and updated too.

Documentation Improvements

Some sections in the ejabberd Documentation have been improved.

Core

  • C2S: Don’t expect that socket will be available in c2s_terminated hook
  • Event handling process hook tracing
  • Guard against erlang:system_info(logical_processors) not always returning a number
  • domain_balancing: Allow for specifying type only, without specifying component_number


MQTT

  • Add TLS certificate authentication for MQTT connections
  • Fix login when generating client id, keep connection record (#3593)
  • Pass property name as expected in mqtt_codec (fixes login using MQTT 5)
  • Support MQTT subscriptions spread over the cluster (#3750)

MUC

  • Attach meta field with real jid to mucsub subscription events
  • Handle user removal
  • Stop empty MUC rooms 30 seconds after creation
  • default_room_options: Update options configurable
  • subscribe_room_many_max_users: New option in mod_muc_admin

mod_conversejs

  • Improved options to support @HOST@ and auto values
  • Set auth and register options based on ejabberd configuration
  • conversejs_options: New option
  • conversejs_resources: New option

PubSub

  • mod_pubsub: Allow for limiting item_expire value
  • mod_pubsub: Unsubscribe JID on whitelist removal
  • node_pep: Add config-node and multi-items features (#3714)

SQL

  • Improve compatibility with various db engine versions
  • Sync old-to-new schema script with reality (#3790)
  • Slight improvement in MSSQL testing support, but not yet complete

Other Modules

  • auth_jwt: Check whether a user is active in SM for a JWT-authenticated user (#3795)
  • mod_configure: Implement Get List of Registered/Online Users from XEP-0133
  • mod_host_meta: New module to serve host-meta files, see XEP-0156
  • mod_mam: Store all mucsub notifications, not only message notifications
  • mod_ping: Delete ping timer if resource is gone after the ping has been sent
  • mod_ping: Don’t send ping if resource is gone
  • mod_push: Fix notifications for pending sessions (XEP-0198)
  • mod_push: Keep push session ID on session resume
  • mod_shared_roster: Adjust special group cache size
  • mod_shared_roster: Normalize JID on unset_presence (#3752)
  • mod_stun_disco: Fix parsing of IPv6 listeners

Dependencies

  • autoconf: Supported from 2.59 to the new 2.71
  • fast_tls: Update to 1.1.14 to support OpenSSL 3
  • jiffy: Update to 1.1.1 to support Erlang/OTP 25.0-rc1
  • luerl: Update to 1.0.0, now available in
  • lager: This dependency is used only when Erlang is older than 22
  • rebar2: Updated binary to work from Erlang/OTP 22 to 25
  • rebar3: Updated binary to work from Erlang/OTP 22 to 25
  • make update: Fix when used with rebar 3.18

Compile

  • mix release: Copy include/ files for ejabberd, deps and otp, in mix.exs
  • rebar3 release: Fix ERTS path in ejabberdctl
  • Set default ejabberd version number when not using git
  • mix.exs: Move some dependencies as optional
  • mix.exs: No need to use Distillery, Elixir has built-in support for OTP releases (#3788)
  • tools/make-binaries: New script for building Linux binaries
  • tools/make-installers: New script for building command line installers

Start

  • New make relive similar to ejabberdctl live without installing
  • ejabberdctl: Fix some warnings detected by ShellCheck
  • ejabberdctl: Mention in the help: etop, ping and started/stopped
  • make rel: Switch to paths: conf/, database/, logs/
  • mix.exs: Add -boot and -boot_var in ejabberdctl instead of adding vm.args
  • tools/ Fix some warnings detected by ShellCheck

Commands API

  • Accept more types of ejabberdctl commands arguments as JSON-encoded
  • delete_old_mam_messages_batch: New command with rate limit
  • delete_old_messages_batch: New command with rate limit
  • get_room_occupants_number: Don’t request the whole MUC room state (#3684, #1964)
  • get_vcard: Add support for MUC room vCard
  • oauth_revoke_token: Add support to work with all backends
  • room_unused_*: Optimize commands in SQL by reusing created_at
  • rooms_unused_...: Let get_all_rooms handle global argument (#3726)
  • stop|restart: Terminate ejabberd_sm before everything else to ensure sessions closing (#3641)
  • subscribe_room_many: New command

Translations

  • Updated Catalan
  • Updated French
  • Updated German
  • Updated Portuguese
  • Updated Portuguese (Brazil)
  • Updated Spanish

Workflows

  • CI: Publish CT logs and Cover on failure to an external GH Pages repo
  • CI: Test shell scripts using ShellCheck (#3738)
  • Container: New workflow to build and publish containers
  • Installers: Add job to create draft release
  • Installers: New workflow to build binary packages
  • Runtime: New workflow to test compilation, rel, starting and ejabberdctl

Full Changelog

All changes between 21.12 and 22.05

ejabberd 22.05 download & feedback

As usual, the release is tagged in the Git source code repository on GitHub.

The source package and installers are now available in GitHub Release / Tags. To check the *.asc signature files, see How to verify ProcessOne downloads integrity.

The Docker image is in Docker Hub, and a new Container image at GitHub Packages.

If you suspect that you’ve found a bug, please search for it or file a bug report on GitHub Issues.

The post ejabberd 22.05 first appeared on ProcessOne.

by Jérôme Sautret at May 17, 2022 11:53

May 11, 2022


Togethr: Social

Last week we launched a sister product from the same team that brings you JMP: Togethr.  Why are we launching a second product?  Why now?  What does this have to do with the mission of JMP in particular, or the Sopranica project in general?

Togethr is a managed hosting platform for small Fediverse instances.  It is powered by the ActivityPub protocol that powers Mastodon, PeerTube, and so many others.  While there are several social networking solutions that build on XMPP (just like JMP does), and indeed we use one for this blog, we chose to go with something else for Togethr.  Does that mean we don’t have hope for XMPP in the social space?  No, rather it is an admission that the largest network for people to interact with in this way exists on ActivityPub-compatible software, and people need a solution they can use today.

As it grows, Togethr gives us the “skin in the game” motivation to bridge these worlds.  We are not the only ones interested in bridging the XMPP and ActivityPub worlds; in fact, the Libervia project is currently working on a grant to produce a first version of a gateway, which should be generally usable later this year.  We hope to eventually roll out an update that makes every Togethr instance seamlessly both ActivityPub and XMPP, without anyone needing to change their address.

Why not wait until “everything is ready” to go live with XMPP and ActivityPub at the same time?  Well, people need a solution.  Many people fleeing silos or otherwise being attracted to federated social networking find that self-hosting is too complicated, or they just don’t have the time to dedicate to it.  Many of these people end up creating an account on a giant volunteer-run instance, joining yet another silo (albeit a nicely federated one) run by admins they don’t know with financial and mental pressures they cannot understand.

Togethr gives people looking to federate their digital social networking experience full control without requiring systems administration knowledge or time.  Our team not only keeps the instance running, but also provides support for users who may not be familiar with the software or the Fediverse in general and need help getting everything set up.  However, there is no lock-in, and people can easily move to another host or to self-hosting at any time.  For example, if someone got an instance and created the user person, they would have an address just like you would expect on any Fediverse instance.  However, since they control the domain, they could move to a different host or self-host, point the domain at the new instance, copy over their data, and no one has to “follow me at my new address”; everything just keeps working.

While we believe that single-user instances are the pinnacle of federation, Togethr does not limit the way people want to use it.  People may have family or friends they want to share posts with, who might not be motivated to join the Fediverse but will accept a personal invitation.  So every Togethr instance allows the customer to invite whoever they would like to join them on the instance, in order to smooth the onboarding for friends and family.  We hope that this can provide an option for people looking to take control over more of their digital life.

by Stephen Paul Weber at May 11, 2022 20:45


Gajim 1.4.0

After more than a year of development, it’s finally time to announce the release of Gajim 1.4.0! 🎉 The Gajim 1.4 series comes with a completely redesigned message window and conversation management. Workspaces allow you to organize your chats and keep matters separate where needed. These changes were only possible by touching a lot of Gajim’s code base, and we appreciate all the feedback we got from you.

What’s New

The new Gajim version comes with a completely redesigned main window. This window offers a sidebar containing workspaces, where you can organize all of your chats. Workspaces were explained in detail last year. Each workspace holds a list of currently opened chats, for both 1:1 chats and group chats. This makes it easy for you to keep matters separate. To keep things simple and easy to use, we decided to migrate to a single-window approach for Gajim. Chats opened via the chat list are displayed right next to it, keeping the window compact.

Gajim’s new main window

The way Gajim displays messages had not changed for years. The previous approach had many limitations, but it was hard to replace. Gajim 1.4 comes with a new approach, where each message is a separate ‘row’, as you can see in the screenshot above. This approach not only looks much cleaner, it also enables us to implement new features in the future, such as message reactions, replies, and so on.

For these changes to be implemented, we had to touch and refactor a good part of Gajim’s code base. Please report any issue you find! We appreciate your feedback.

Windows users please note: Windows builds are now based on Python 3.9, which does not run on Windows 7 or older.

More Changes


  • Redesigned Contact Info and Group Chat Info windows
  • Redesigned Group Chat Creation window
  • Full compatibility with XEP-0393 Message Styling
  • Real-time message styling in the chat input box
  • URL Image Preview, Plugin Installer, Syntax highlighting, and AppIndicator plugins have been integrated into Gajim
  • Support for XEP-0425 Message Moderation in group chats
  • Administrators can now define setting overrides


  • Reworked notification system
  • History manager has been replaced by Gajim’s internal search
  • ‘Note to myself’ feature: write messages to your own contact (e.g. to another device)
  • Improved Windows installer
  • Improved contrast for light and dark themes
  • Bookmark management window has been removed (all actions are still available in Gajim’s user interface)
  • XEP-0174 Serverless Messaging via Zeroconf has been removed
  • Client certificate setup has been removed
  • User Mood (XEP-0107) and User Activity (XEP-0108) have been removed


Over 120 issues have been fixed in this release.

Have a look at the changelog for the complete list.


As always, don’t hesitate to contact us or open an issue on our GitLab.

May 11, 2022 00:00

May 06, 2022

Paul Schaub

Creating an OpenPGP Web-of-Trust Implementation – A Series

I am excited to announce that PGPainless will receive funding from NGI Assure to develop an implementation of the Web-of-Trust specification proposal!

The Web-of-Trust (WoT) serves as an example of a decentralized authentication mechanism for OpenPGP. While there are some existing implementations of the WoT in applications such as GnuPG, their algorithms are often poorly documented. As a result, WoT support in client applications is often missing or inadequate.

This is where the aforementioned specification comes into play. This document strives to provide a well-documented description of how to implement the WoT in an interoperable and comprehensible way. There is already an existing implementation by the Sequoia-PGP project (Neal, the author of the specification, is also heavily involved with Sequoia) which can serve as a reference implementation.

Since I imagine implementing the Web-of-Trust isn’t a straightforward task (even though there is now a specification document), I decided to dedicate a series of blog posts to go along with my efforts. Maybe this will help others implement it in the future.

What exactly is the Web-of-Trust?

The essential problem with public key infrastructure (PKI) is not obtaining the encryption keys of your contacts, but verifying that the key you have for a contact really is the proper key and not that of an attacker. One straightforward solution to this is used by every user of the internet every day. If you visit a website, the web server of the site presents your browser with its TLS certificate. Now the browser has to figure out whether this certificate is trustworthy. It does so by checking whether there is a valid trust-path from one of its root certificates to the site’s certificate. Your browser comes with a limited set of root certificates preinstalled. This set was agreed upon by your browser/OS vendor at some point. These root certificates are (mostly) managed by corporations whose business model is to vouch for your server’s authenticity. You pay them so that they testify to others that your TLS certificate is legitimate.

In this case, a trust-path is a chain of certifications from the trusted root certificate down to the site’s TLS certificate. You can inspect this chain manually by clicking the lock icon in your browser’s address bar (at least on Firefox). Below is a visualization of this blog’s TLS certificate chain.

The certificate “ISRG Root X1” belongs to Let’s Encrypt, a not-for-profit CA that is very likely already embedded in your browser. R3 is an intermediate certificate authority of Let’s Encrypt. It certified my blog’s TLS certificate. Since, during the certificate renewal process, Let’s Encrypt made sure that my server controls my domain, it has some degree of confirmation that the domain in fact belongs to me. This step can be called manual identity verification. As a result, it can attest the legitimacy of my TLS certificate to others.
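The chain validation described above can be sketched in a few lines. This is a toy model with invented dict fields, not a real X.509 validator: it only checks that issuers link up and that the chain ends at a certificate we already trust, and it omits signature verification entirely.

```python
def chain_is_valid(chain, trusted_roots):
    """Check a certificate chain ordered leaf-to-root.

    `chain` is a list [leaf, intermediate..., root]; each certificate is a
    dict with 'subject' and 'issuer' fields (a deliberately simplified
    stand-in for real certificates). Signature checking is omitted -- this
    models only the structural trust-path idea.
    """
    if not chain:
        return False
    # Each certificate must be issued by the next one up the chain.
    for cert, issuer in zip(chain, chain[1:]):
        if cert["issuer"] != issuer["subject"]:
            return False
    # The chain must terminate in a certificate we already trust.
    return chain[-1]["subject"] in trusted_roots
```

With the chain from the example above, the leaf would carry issuer “R3”, the intermediate “R3” would carry issuer “ISRG Root X1”, and the root would be found in the browser’s preinstalled trust set.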

One property of this model is that it is centralized. Although there are a number of root certificates (hundreds in fact, check your /etc/ssl/certs/ directory!), it is not trivial to set up your own, let alone get browser/OS vendors to include it in their distributions.

Now let’s take a look at the Web-of-Trust instead. The idea that best describes the difference between the centralized TLS model and the WoT is that people trust people instead of corporations. If Alice trusts and vouches for Bob, and Bob trusts and vouches for Charlie, Alice could transitively trust Charlie. These trust paths can get arbitrarily long, and the whole network of trust paths is what we call the Web-of-Trust. Instead of relying on a more-or-less trustworthy certificate authority to attest key authenticity, we gather evidence for the trustworthiness of a key in our social circle.

This model can be applied to corporate environments as well, by the way. Let’s say FooBank is using the Web-of-Trust for its encrypted email traffic. FooBank’s admin would be tasked with keeping a list of the email addresses and encryption keys of all current employees. They would then certify these keys by signing them with a company key which is kept secure. These certification signatures are valid as long as the employee works at the bank. Other employees would in turn sign the company key and mark it as trustworthy. Now they can build a trust path from their own key to that of every other current employee. In that sense, the CA model can be seen as a special case of the Web-of-Trust.

The main problem now is to find an algorithm for determining whether a valid trust path exists between our trust-root and the certificate of interest. You might wonder: “What is the trust-root? I thought the WoT comes without centralized trust in a single entity?” And you are right. But we all trust ourselves, don’t we? And we trust ourselves to decide whom to trust. So to realize the WoT, we define that each user has their own “trust-root” certificate: a single certificate that certifies “trusted introducers”. This is the start of the trust-path. In the case of FooBank, employee Albert might for example have a personal trust-root certificate that certifies FooBank’s CA key, as well as that of Albert’s wife Berta. Now Albert can securely message any FooBank employee, as well as his wife, since there are trust-paths available from his trust-root to those contacts.

Luckily, the problem of finding an algorithm to determine trust-paths is already solved by the Web-of-Trust specification. All that’s left to do is to understand and implement it. That cannot be that hard, can it?
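As a toy illustration of that path-finding problem, here is a breadth-first search over certifications. The key names are made up, and this is not the actual WoT algorithm: the specification additionally weighs trust amounts, depth limits, and regular-expression constraints on user IDs, while this sketch only finds connectivity.

```python
from collections import deque

def find_trust_path(certifications, trust_root, target):
    """Breadth-first search for a trust path from trust_root to target.

    `certifications` maps an issuer key to the set of keys it certifies.
    Returns the shortest path as a list of keys, or None if no path exists.
    """
    queue = deque([[trust_root]])
    seen = {trust_root}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        # Follow every certification made by the key at the end of the path.
        for nxt in certifications.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

In the FooBank example, Albert’s trust-root certifies the company CA key, which in turn certifies each employee key, so a two-hop path exists from Albert to any colleague.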

To be continued…

by vanitasvitae at May 06, 2022 10:18

May 05, 2022

The XMPP Standards Foundation

The XMPP Newsletter April 2022

Welcome to the XMPP Newsletter, great to have you here again! This issue covers the month of April 2022.

Like this newsletter, many projects and their efforts in the XMPP community are a result of people’s voluntary work. If you are happy with the services and software you may be using, especially throughout the current situation, please consider saying thanks or help these projects! Interested in supporting the Newsletter team? Read more at the bottom.

Newsletter translations

This is a community effort, and we would like to thank translators for their contributions. Volunteers are welcome! Translations of the XMPP Newsletter will be released here (with some delay):

XSF Announcements

XSF and Google Summer of Code 2022

  • The XSF has been accepted as hosting organization at Google Summer of Code 2022 (GSoC).
  • XMPP Newsletter via mail: We migrated to our own mailing-list server in order to move away from Tinyletter, which we left due to privacy concerns. It is a read-only list: once you subscribe, you will receive the XMPP Newsletter on a monthly basis.
  • By the way, have you checked our nice XMPP RFC page? :-)

XSF fiscal hosting projects

The XSF offers fiscal hosting for XMPP projects. Please apply via Open Collective. For more information, see the announcement blog post. Current projects:

XMPP Community Projects

A new community space for XMPP related projects and individuals has been created in the Fediverse! Join us on our new Lemmy instance and chat about all XMPP things!


Are you looking for an XMPP provider that suits you? There is a new website based on the data of XMPP Providers. XMPP Providers has a curated list of providers and tools for filtering and creating badges for them. The machine-readable list of providers can be integrated in XMPP clients to simplify the registration. You can help by improving your website (as a provider), by automating the manual tasks (as a developer), and by adding new providers to the list (as an interested contributor). Read the first blog post!

XMPP Providers



Thilo Molitor presented Monal’s new, even more privacy-friendly push design at the Berlin XMPP Meetup!

Monal push


The Mellium Dev Communiqué for April 2022 has been released and can be found over on Open Collective.

Maxime “pep.” Buquet wrote some thoughts regarding “Deal on Digital Markets Act: EU rules to ensure fair competition and more choice for users” in his Interoperability in a “Big Tech” world article. In a later article he describes part of his threat model, detailing how XMPP comes into play and proposing ways it could be improved.

German “Freie Messenger” shares some thoughts on interoperability and the Digital Markets Act (DMA). They also offer a comparison of XMPP and Matrix.

Software news

Clients and applications

BeagleIM 5.2 and SiskinIM 7.2 have just been released with fixes for OMEMO-encrypted messages in MUC channels, MUC participants disappearing randomly, and VoIP calls sending an incorrect payload during call negotiation.

converse.js published version 9.1.0. It comes with a new dark theme, several improvements for encryption (OMEMO), an improved stanza timeout, font icons, updated translations, and enhancements of the IndexedDB. Find more in the release notes.

Gajim Development News: This month came with a lot of preparations for the release of Gajim 1.4 🚀 Gajim’s release pipeline has been improved in many ways, allowing us to make releases more frequently. Furthermore, April brought improvements for file previews on Windows.

Go-sendxmpp version v0.4.0 with experimental Ox (OpenPGP for XMPP) support has been released.

JMP now computes international call rates using a trie. There are also new commands and team members.
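The trie mentioned here lends itself to a short illustration. The following Python sketch shows how a longest-prefix trie lookup can resolve a dialed number to a per-minute rate; the prefixes and rates below are made up for the example and are not JMP's actual data or code:

```python
# Illustrative sketch only: longest-prefix rate lookup via a digit trie.
# All prefixes and rates here are hypothetical, not JMP's real rate table.

class RateTrie:
    def __init__(self):
        self.children = {}   # digit -> child RateTrie node
        self.rate = None     # rate attached at this prefix, if any

    def insert(self, prefix, rate):
        node = self
        for digit in prefix:
            node = node.children.setdefault(digit, RateTrie())
        node.rate = rate

    def lookup(self, number):
        # Walk the dialed number digit by digit, remembering the
        # deepest (longest-prefix) rate seen along the way.
        node, best = self, None
        for digit in number:
            node = node.children.get(digit)
            if node is None:
                break
            if node.rate is not None:
                best = node.rate
        return best

rates = RateTrie()
rates.insert("1", 0.01)      # hypothetical: North America
rates.insert("44", 0.02)     # hypothetical: UK
rates.insert("447", 0.03)    # hypothetical: UK mobile

print(rates.lookup("4471234"))  # → 0.03 (longest match "447")
print(rates.lookup("4420123"))  # → 0.02 (falls back to "44")
```

The benefit of the trie is that one walk over the dialed digits finds the most specific matching prefix without scanning the whole rate table.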

Monal 5.1 has been released. This release brings OMEMO support in private group chats, communication notifications on iOS 15, and many improvements.

PravApp project is a plan to get a lot of people from India to invest small amounts to run an interoperable XMPP-based messaging service that is easier to join and discover contacts, similar to the Quicksy app. Prav will be Free Software, which respects users' freedom. The service will be backed by a cooperative society in India to ensure democratic decision making in which users can take part as well. Users will control the privacy policy of the service.

Psi+ 1.5.1619 (2022-04-09) has been released.

Poezio 0.14 has been released alongside multiple backend libraries. This new release brings in lots of bug fixes and small improvements. Big changes are coming, read more in the article.

Poezio Stickers

Profanity 0.12.1 has been released, which brings some bug fixes.

UWPX ships two small pre-release updates concerning a critical fix for a crash that occurred when trying to render an invalid user avatar, as well as issues with the Windows Store builds. Besides that, it also got a minor UI update this month.


Ignite Realtime Community:

  • Version 9.1.0 release 1 of the Openfire inVerse plugin has been released, which enables deployment of the third-party Converse client in Openfire.
  • Version 4.4.0 release 1 of the Openfire JSXC plugin has been released, which enables deployment of the third-party JSXC client in Openfire.
  • Version 1.2.3 of the Openfire Message of the Day plugin has been released, and it ships with German translations for the admin console.
  • Version 1.8.0 of the Openfire REST API plugin has been released, which adds new endpoints for readiness, liveliness and cluster status.


slixmpp 1.8.2 has been released. It fixes RFC3920 sessions, improves certificate errors handling, and adds a plugin for XEP-0454 (OMEMO media sharing).

The library v0.21.2 has been released! Highlights include support for PEP Native Bookmarks, and entity capabilities. For more information, see the release announcement.

Extensions and specifications

Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).

By the way, xmpp.org features a new page about XMPP RFCs.


The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.


  • No new XEPs this month.


If an experimental XEP is not updated for more than twelve months, it will be moved off Experimental to Deferred. If there is another update, it will put the XEP back onto Experimental.

  • No XEPs deferred this month.


  • Version 0.4 of XEP-0356 (Privileged Entity)
    • Add “iq” privilege (necessary to implement XEPs such as Pubsub Account Management (XEP-0376)).
    • Roster pushes are now transmitted to the privileged entity with a “roster” permission of “get” or “both”. This can be disabled.
    • Reformulate to specify that only initial presence and “unavailable” stanzas are transmitted with the “presence” permission.
    • Namespace bump. (jp)

Last Call

Last calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call helps improve the XEP before returning it to the Council for advancement to Stable.

  • No Last Call this month.


  • No XEPs advanced to Stable this month.


  • No XEP deprecated this month.

Call for Experience

A Call for Experience, like a Last Call, is an explicit call for comments, but in this case it’s mostly directed at people who’ve implemented, and ideally deployed, the specification. The Council then votes to move it to Final.

  • No Call for Experience this month.

Spread the news!

Please share the news on other networks:

Subscribe to the monthly XMPP newsletter

Also check out our RSS Feed!

Looking for job offers or want to hire a professional consultant for your XMPP project? Visit our XMPP job board.

Help us to build the newsletter

This XMPP Newsletter is produced collaboratively by the XMPP community. Therefore, we would like to thank Adrien Bourmault (neox), anubis, Anoxinon e.V., Benoît Sibaud, cpm, daimonduff, emus, Ludovic Bocquet, Licaon_Kter, mathieui, MattJ, nicfab, Pierre Jarillon, Ppjet6, Sam Whited, singpolyma, TheCoffeMaker, wurstsalat, Zash for their support and help in creation, review, translation and deployment. Many thanks to all contributors and their continuous support!

Each month’s newsletter issue is drafted in this simple pad. At the end of each month, the pad’s content is merged into the XSF Github repository. We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. You have a project and want to spread the news? Please consider sharing your news or events here, and promote it to a large audience.

Tasks we do on a regular basis:

  • gathering news in the XMPP universe
  • short summaries of news and events
  • summary of the monthly communication on extensions (XEPs)
  • review of the newsletter draft
  • preparation of media images
  • translations


This newsletter is published under CC BY-SA license.

May 05, 2022 00:00

April 30, 2022


Development News April 2022

This month came with a lot of preparations for the release of Gajim 1.4 🚀 Gajim’s release pipeline has been improved in many ways, allowing us to make releases more frequently. Furthermore, April brought improvements for file previews on Windows.

Changes in Gajim

For two and a half years I (wurstsalat) have been writing (and translating) Gajim’s monthly development news. Keeping this up on a monthly basis takes a lot of time and effort. Upcoming development news will be released on an irregular basis, focussing on features instead of monthly progress.

It has been a while since the release of Gajim 1.3.3. But why does it take so long until a new version gets released? One of the reasons is the amount of manual work it takes to update every part of Gajim’s internals for a new release. This does not include functional changes, but only things which need to be updated (version strings, translations, changelogs, etc.) before a new version can be deployed. Note that Gajim is available for multiple distributions on Linux, for Flatpak, and for Windows, which makes releasing a new version more complicated. In order to make releases happen more frequently, i.e. reducing the manual work involved in deploying a new version, great efforts have been made:

  • deployment pipelines have been established on Gajim’s Gitlab
  • the process of applying Weblate translations has been integrated better
  • changelogs will be generated automatically from git’s commit history
  • Flatpak update process has been simplified

There are more improvements to come, but this should already make deploying a new version much easier.

What else happened:

  • Sentry integration has been improved
  • libappindicator is now used on Wayland, if available
  • downloading a file preview can now be cancelled
  • mime type guessing for file previews has been improved on Windows
  • audio previews are now available on Windows
  • Security Labels (XEP-0258) selector has been improved
  • improvements for private chat messages

Plugin updates

Gajim’s OpenPGP plugin received an update with some usability improvements.

Changes in python-nbxmpp

python-nbxmpp is now ready for quick deployment as well.

As always, feel free to join the discussion with us.


April 30, 2022 00:00

April 27, 2022


Newsletter: New Staff, New Commands

Hi everyone!

Welcome to the latest edition of your pseudo-monthly JMP update!

In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client.  Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

The JMP team is growing.  This month we added root, whom many of you will know from the chatroom.  root has been a valuable and helpful member of the community for quite some time, and we are pleased to add them to the team.  They will be primarily helping with support and documentation, but also with, let’s face it, everything else.

The account settings bot has a new command for listing recent financial transactions.  You can use this command to check on your auto top-ups, recent charges for phone calls, rewards for referrals, etc.  There is now also a command for changing your Jabber ID, so if you find yourself in a situation where you are changing for any reason you can do that yourself without waiting for support to do it manually.

This month also saw the release of Cheogram Android 2.10.5-2.  This version has numerous bug fixes for crashes and other edge cases and is based on the latest upstream code, which includes a security fix, so be sure to update!  Support for Tor and extended connection settings has also been fixed, a new darker theme has been added, and there are UI tweaks to recognize that messages are often encrypted with TLS.

To learn what’s happening with JMP between newsletters, here are some ways you can find out:

Thanks for reading and have a wonderful rest of your week!

by Stephen Paul Weber at April 27, 2022 21:00

April 21, 2022

Erlang Solutions

What are the key trends in digital payments? part 1/2

Payments are the backbone of a functioning global economy. A payments system can be defined as any system used to settle a financial transaction by exchanging monetary value. Payments are a part of financial services that has undergone rapid, transformational change over recent years, and the Erlang Solutions team has been at the cutting edge of many of these changes, working on exciting client projects with organisations ranging from global card payments companies to fintech startups.

In this two-part article, we take a look at some of the main drivers of change to the way payments work and to the broader payments ecosystem, using our fintech industry knowledge and experience working on some of the most performant fintech systems in the world, such as Vocalink’s Instant Payments Solution (IPS).

If you are involved in the payments industry in the Nordics region of Europe, make sure to catch up with our Nordics Managing Director, Erik Schön, who will be taking part in a panel discussion at NextGen Nordics taking place in Stockholm, Sweden, on 27 April. 

The digital payments landscape

Evolving customer expectations alongside technological advances are driving innovation that prioritises speed, near to real-time payments, frictionless transactions and decentralised models. Also, compounded by the pandemic, significant growth of digital commerce has led to record payment volumes in most markets. Combined, these factors make payments one of the most interesting areas of financial services. There are opportunities for innovative fintechs to provide better client experiences, traditional players to expand their services and tech enablers to offer alternative ways for transactions to flow.

With market competition driving fee decreases, it is a challenge for traditional players to maintain the same levels of profitability while using existing payment infrastructure. We have seen some of our fintech clients launch into the payments ecosystem offering a more diverse range of services, and traditional payments companies are responding by leveraging the huge amounts of data at their disposal to guide a strategy of adding to their offering. These new services are in areas including loyalty, tailored offers, data insights, risk management and more.

At the heart of payments, these themes have been massively accelerated over the pandemic years. You can take a deeper dive into this and other emerging tech trends in financial services by downloading our Fintech Trends in 2022 white paper. 

A cashless world

Consumers’ shift to digital channels is driving demand for seamless fulfilment and instant gratification. A recent Capgemini World Payments Report survey found an increase from 24% to 46% in respondents who had e-commerce accounting for more than half of their monthly spending when comparing before the pandemic to now.

With 91% of the global population expected to own a smartphone by 2026, according to Statista[1], these customers are unlikely to return to the way things were done before, having experienced the efficiencies offered by digital payments.

Nayapay, one of our clients in the South Asian market using our MongooseIM chat engine, is an example of a new player in the payments space seizing the opportunity to disrupt local markets. Their chat-based payments app targets the unbanked in Pakistan and is built around fusing the penetration of smartphone usage and people’s willingness to integrate transactions into their daily social, digital activities.

Cashless payments are becoming the norm

Demand for faster payments

Demand for instant transactions is driving change in cross-border payments, international remittances and e-commerce. Previously, mirroring the instantaneousness of a cash transaction via electronic means had been an ongoing technical challenge. Now, the introduction of real-time clearing and settlement facilities in many markets makes processing payments almost instantly possible. At the beginning of 2020, The Clearing House’s Real-Time Payments System had 19 large institutions participating. Today, it has 114 banks and credit unions as direct members.

Frustration with the latency and cost of the traditional banking model has led to the emergence of alternative options. Innovative solutions such as the P27 initiative in the Nordic region show how fintech can blend with conventional systems to provide better payments infrastructure for all. P27, named from the 27 million citizens in the Nordic region, aims to integrate the payments of four countries and currencies into a single immediate payment system, again using the Vocalink IPS.

Growth in embedded payments

Embedded finance is changing the way payments are made where financial products are added to the transactional flow in non-financial platforms. With consumers demanding ever more convenient, frictionless ways to make payments using various devices from wallets to wearables, embedded or contextual payment options add convenience and speed to the payments process.

On the merchant side of things, embedded finance helps them to better understand the best payment terms to offer customers, provide seamless checkout, request payment, and offer financing such as buy now pay later (BNPL), all within a single customer experience.

Aside from BNPL, other financial products (like lending and card-issuing) are also moving into contextual environments. Major banks can extend their reach to millions of new users through Banking-as-a-Service (BaaS) APIs to technology businesses and platforms outside of the financial services industry.

Stay tuned…

To make sure you don’t miss the second part of this look at modern payments where we examine what these trends mean for strategy setting by industry players and what the future might look like, you should sign up to our Fintech Matters mailing list.

Sign up here >>

If you want to start a conversation about engaging us for your fintech project or talk about partnering and collaboration opportunities, please send our Fintech Lead, Michael Jaiyeola, an email or connect with him via Linkedin.

The post What are the key trends in digital payments? part 1/2 appeared first on Erlang Solutions.

by Michael Jaiyeola at April 21, 2022 15:08

Understanding Processes for Elixir Developers

This post is for all developers who want to try Elixir or are taking their first steps in Elixir. This content is aimed at those who already have previous experience with the language.

This will help to explain one of the most important concepts in the BEAM: processes. Elixir is a general-purpose programming language, and you don’t need to understand how the virtual machine works to use it, but if you want to take advantage of all its features, this will help you understand a few things about how to design programs in the Erlang ecosystem.

Why understand processes?

Scalability, fault-tolerance, concurrent design, distributed systems – the list of things that Elixir famously makes easy is long and continues to grow. All these features come from the Erlang virtual machine, the BEAM. Elixir is a general-purpose programming language, and you can learn the basics of this functional language without understanding processes, especially if you come from another paradigm. But understanding processes is an excellent way to grow your understanding of what makes Elixir so powerful and how to harness it for yourself, because processes represent one of the key concepts of the BEAM world.

What is a process in the BEAM world

A process is an isolated entity where code execution happens.

Processes are everywhere in an Erlang system: the iex shell, the observer, and the OTP patterns are all examples of processes at work. They are lightweight, allowing us to run concurrent programs and build distributed, fault-tolerant designs. You can explore the wide variety of processes running in a system with the Erlang observer application. Because of how the BEAM runs code, you can execute a program inside a module without ever knowing the execution happens in a process.

Most Elixir users would have at least heard of OTP. OTP allows you to use all the capabilities of the BEAM, and understanding what happens behind the scenes will help you to take full advantage of abstractions when designing processes.

A process is an isolated entity where code execution happens, and processes are the basis for designing software systems that take advantage of the BEAM’s features.

You can think of a process as a block of memory where you store and manipulate data. A process’s memory is made up of the following parts:

  • Stack: To keep local variables.
  • Heap: To keep larger structures.
  • Mailbox: Stores messages sent from other processes.
  • Process Control Block: Keeps track of the state of the process.

Process lifecycle

We could define the process lifecycle for now as:

  1. Process creation
  2. Code execution
  3. Process termination 

Process Creation

The function spawn helps us to create a new process. We need to provide a function to be executed inside it, and spawn returns the process identifier (PID). Elixir has a module called Process that provides functions to inspect a process.

Let’s look at the following example using the iex:

  1. Create an anonymous function that prints a string. (The function self/0 returns the process identifier.)
  2. PROCESS CREATION: Invoke the function spawn with the function created as a parameter. This will create a new process.
  3. CODE EXECUTION: The created process will execute the provided function immediately. You should see the message printed.
  4. TERMINATING: After the code execution, the process will terminate. You can check this using the function Process.alive?/1; if the process is still alive, you will get a true value.
iex(1)> execute_fun = fn -> IO.puts "Hi! ⭐  I'm the process #{inspect(self())}." end
#Function<45.65746770/0 in :erl_eval.expr/5>

iex(2)> pid = spawn(execute_fun)
Hi! ⭐  I'm the process #PID<0.234.0>.

iex(3)> Process.alive?(pid)
false

Receiving messages

A process is an entity that executes code, but it can also receive and process messages from other processes (processes can communicate with each other only via messages). Sometimes this might be confusing, but these are different and complementary capabilities.

The receive statement helps us to process the messages stored in the mailbox. You can send messages using send, but the receiving process will only store them in its mailbox. To process them, you need to implement the receive statement, which puts the process into a mode that waits for sent messages to arrive.
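The split between sending and receiving can be modelled in any language with a thread and a blocking queue. This Python sketch is only a rough analogy for Elixir’s mailbox, not how the BEAM actually works: send merely deposits a message, and the receive loop blocks until one arrives.

```python
# Conceptual analogy only: an Erlang-style mailbox approximated with a
# Python queue and a worker thread. This is not the BEAM's mechanism.
import queue
import threading

mailbox = queue.Queue()
processed = []

def receive_loop():
    while True:
        msg = mailbox.get()          # blocks, like a process waiting in receive
        if msg == "stop":            # sentinel standing in for termination
            break
        processed.append(f"handled {msg}")

worker = threading.Thread(target=receive_loop)
worker.start()

mailbox.put("Hi")    # "send": the message just lands in the mailbox
mailbox.put("Bye")
mailbox.put("stop")
worker.join()

print(processed)  # → ['handled Hi', 'handled Bye']
```

Note how the sender never waits for processing; it only enqueues, which mirrors how send/2 in Elixir returns immediately while the message sits in the mailbox.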

  1. Let’s create a new module with the function awaiting_for_receive_messages/0 to implement the receive statement.
  2. PROCESS CREATION: Spawn a new process with the module created. This will execute the function with the receive statement.
  3. CODE EXECUTION: Since the function provided has the receive statement, the process will be put into a waiting mode and stay alive until it processes a message. We can verify this using Process.alive?/1.
  4. RECEIVE A MESSAGE: We can send a new message to this process using the function send/2.
  5. TERMINATING: Once the message has been processed, our process will die.

iex(1)> defmodule MyProcess do
...(1)>   def awaiting_for_receive_messages do
...(1)>     IO.puts "Process #{inspect(self())}, waiting to process a message!"
...(1)>     receive do
...(1)>       "Hi" ->
...(1)>         IO.puts "Hi from me"
...(1)>       "Bye" ->
...(1)>         IO.puts "Bye, bye from me"
...(1)>       _ ->
...(1)>         IO.puts "Processing something"
...(1)>     end
...(1)>     IO.puts "Process #{inspect(self())}, message processed. Terminating..."
...(1)>   end
...(1)> end

iex(2)> pid = spawn(MyProcess, :awaiting_for_receive_messages, [])
Process #PID<0.125.0>, waiting to process a message!

iex(3)> Process.alive?(pid)
true

iex(4)> send(pid, "Hi")
Hi from me
Process #PID<0.125.0>, message processed. Terminating...

iex(5)> Process.alive?(pid)
false

Keeping the process alive

One option is to keep the process running so it can continue processing messages from its mailbox. Remember how the receive statement works: we can simply call the enclosing function again after processing a message, creating a continuous loop that prevents termination.

  1. Modify our module to call the same function after processing a message.
  2. PROCESS CREATION: Spawn a new process.
  3. RECEIVE A MESSAGE: Send a new message to be executed in the process created. After this, we’ll call the same function that invokes the receive statement to wait to process another message. This will prevent the process termination. 
  4. CODE EXECUTION: Use the Process.alive?/1 and verify that our process is alive.
iex(1)> defmodule MyProcess do
...(1)>   def awaiting_for_receive_messages do
...(1)>     IO.puts "Process #{inspect(self())}, waiting to process a message!"
...(1)>     receive do
...(1)>       "Hi" ->
...(1)>         IO.puts "Hi from me"
...(1)>         awaiting_for_receive_messages()
...(1)>       "Bye" ->
...(1)>         IO.puts "Bye, bye from me"
...(1)>         awaiting_for_receive_messages()
...(1)>       _ ->
...(1)>         IO.puts "Processing something"
...(1)>         awaiting_for_receive_messages()
...(1)>     end
...(1)>   end
...(1)> end


iex(3)> pid = spawn(MyProcess, :awaiting_for_receive_messages, [])
Process #PID<0.127.0>, waiting to process a message!

iex(4)> Process.alive?(pid)
true

iex(5)> send(pid, "Hi")
Hi from me
Process #PID<0.127.0>, waiting to process a message!

iex(6)> Process.alive?(pid)
true

Hold the state

Up to this point, we have seen how to create a new process, process messages from its mailbox, and keep it alive. The mailbox is an important part of the process where messages are stored, but other parts of its memory allow us to keep an internal state. Let’s see how to hold an internal state.

  1. Let’s modify our module to receive a list to store all the messages received. We just need to call the same function and send the list updated as param. We’ll print the list of messages as well.
  2. PROCESS CREATION: Spawn a new process. We’ll send an empty list as param to store the messages sent. 
  3. RECEIVE A MESSAGE: Send a new message to be executed in the process created. After receiving the message, we’ll call the same function with the list of messages updated as an argument. This will prevent the process termination, and will update the internal state. 
  4. RECEIVE A MESSAGE: Send another message and see the output. You should get the list of the messages processed.
  5. CODE EXECUTION: Use Process.alive?/1 to verify that our process is alive.

defmodule MyProcess do
  def awaiting_for_receive_messages(messages_received \\ []) do
    receive do
      "Hi" = msg ->
        IO.puts "Hi from me"
        [msg | messages_received]
        |> IO.inspect(label: "MESSAGES RECEIVED")
        |> awaiting_for_receive_messages()

      "Bye" = msg ->
        IO.puts "Bye, bye from me"
        [msg | messages_received]
        |> IO.inspect(label: "MESSAGES RECEIVED")
        |> awaiting_for_receive_messages()

      msg ->
        IO.puts "Processing something"
        [msg | messages_received]
        |> IO.inspect(label: "MESSAGES RECEIVED")
        |> awaiting_for_receive_messages()
    end
  end
end

iex(3)> pid = spawn(MyProcess, :awaiting_for_receive_messages, [])

iex(4)> Process.alive?(pid)
true

iex(5)> send(pid, "Hi")
Hi from me
MESSAGES RECEIVED: ["Hi"]

iex(6)> send(pid, "Bye")
Bye, bye from me
MESSAGES RECEIVED: ["Bye", "Hi"]

iex(7)> send(pid, "Heeeey!")
Processing something
MESSAGES RECEIVED: ["Heeeey!", "Bye", "Hi"]

How to integrate Elixir reasoning in your processes

Well done! I hope all of these examples and explanations were enough to illustrate what a process is. It’s important to keep in mind the anatomy and the life cycle to understand what’s happening behind the scenes.

You can design with plain processes or with OTP abstractions, but the concepts behind them are the same. Let’s look at an example with Phoenix LiveView:

defmodule DemoWeb.ClockLive do
  use DemoWeb, :live_view

  def render(assigns) do
    ~H"""
    <h2>It's <%= NimbleStrftime.format(@date, "%H:%M:%S") %></h2>
    <%= live_render(@socket, DemoWeb.ImageLive, id: "image") %>
    """
  end

  def mount(_params, _session, socket) do
    if connected?(socket), do: Process.send_after(self(), :tick, 1000)

    {:ok, put_date(socket)}
  end

  def handle_info(:tick, socket) do
    Process.send_after(self(), :tick, 1000)
    {:noreply, put_date(socket)}
  end

  def handle_event("nav", _path, socket) do
    {:noreply, socket}
  end

  defp put_date(socket) do
    assign(socket, date: NaiveDateTime.local_now())
  end
end

While the functions render/1 and mount/3 allow you to set up the LiveView, the functions handle_info/2 and handle_event/3 update the socket, which holds an internal state. Does this sound familiar? This is a process! A LiveView is an OTP abstraction that creates a process behind the scenes, and of course it contains other implementation details. In this particular case, the essence of the process shows when the LiveView re-renders the HTML, keeps all the variables inside its state, and handles all the interactions while modifying it.

Understanding processes gives you the concepts to understand how the BEAM works and to learn how to design better programs. Many of the libraries written in Elixir and the OTP abstractions use these concepts as well, so next time you use one of them in your projects, think about these explanations to better understand what’s happening under the hood.

Thanks for reading this. If you’d like to learn more about Elixir check out our training schedule or join us at ElixirConf EU 2022.

About the author

Carlo Gilmar is a software developer at Erlang Solutions, based in Mexico City. He started his journey as a developer at Making Devs, and he’s the founder of Visual Partner-Ship, a creative studio mixing technology and visual thinking.

The post Understanding Processes for Elixir Developers appeared first on Erlang Solutions.

by Carlo Gilmar at April 21, 2022 10:18

April 14, 2022

Erlang Solutions

Introducing Stream Support In RabbitMQ

In July 2021, streams were introduced to RabbitMQ, utilizing a new blazing-fast protocol that can be used alongside AMQP 0.9.1. Streams offer an easier way to solve a number of problems in RabbitMQ, including large fan-outs, replay & time travel, and large logs, all with very high throughput (1 million messages per second on a 3-node cluster). Arnaud Cogoluègnes, Staff Engineer @ VMware, introduced streams and how they are best used.

This talk was recorded at the RabbitMQ Summit 2021. The 4th edition of RabbitMQ Summit is taking place as a hybrid event, both in-person (CodeNode venue in London) and virtual, on 16th September 2022 and brings together some of the world’s biggest companies, using RabbitMQ, all in one place. 

Streams: A New Type of Data Structure in RabbitMQ

Streams are a new data structure in RabbitMQ that open up a world of possibilities for new use cases. They model an append-only log, which is a big change from traditional RabbitMQ queues, as they have non-destructive consumer semantics. This means that when you read messages from a Stream, you don’t remove them, whereas, with queues, when a message is read from a queue, it is destroyed. This re-readable behaviour of RabbitMQ Streams is facilitated by the append-only log structure.

RabbitMQ also introduced a new protocol, the Stream protocol, which allows much faster message flow, however, you can access Streams through the traditional AMQP 0.9.1 protocol as well, which remains the most used protocol in RabbitMQ. They are also accessible through the other protocols that RabbitMQ supports, such as MQTT and STOMP.  

Streams strong points

Streams have unique strengths that allow them to shine for some use cases. These include:

Large fan-outs

When you have multiple applications in your system needing to read the same messages, you have a fan-out architecture. Streams are great for fan-outs, thanks to their non-destructive consuming semantics, removing the need to copy the message inside RabbitMQ as many times as there are consumers.


Replay and time travel

Streams also offer replay and time-travelling capabilities. Consumers can attach anywhere in a stream, using an absolute offset or a timestamp, and they can read and re-read the same data as many times as needed.


Throughput

Thanks to the new stream protocol, streams have the potential to be significantly faster than traditional queues. If you want high throughput or you are working with large messages, streams can often be a suitable option.

Large logs

Streams are also good for large logs. Messages in streams are always persisted on the file system, and the messages don’t stay in memory for long. Upon consumption, the operating system’s file cache is used to allow for fast message flow.

The Log Abstraction

A stream is immutable: you are able to add messages, but once a message has entered the stream, it cannot be removed. This makes the log abstraction of the stream quite a simple data structure compared to queues, where messages are always added and removed. This brings us to another important concept: the offset. The offset is just a technical index of a message inside the stream (or a timestamp). Consumers can instruct RabbitMQ to start reading from an offset instead of from the beginning of the stream. This allows for easy time travelling and replaying of messages. Consumers can also delegate the offset-tracking responsibility to RabbitMQ.

We can have any number of consumers on a stream, and they don’t compete with each other: one consuming application will not steal messages from the other applications, and the same application can read the stream of messages many times.

Queues can store messages in memory or on disk, and they can live on only one node or be mirrored; Streams are persistent and replicated at all times. When we create a stream, it gets a leader located on one node and replicas on other nodes. The replicas follow the leader and synchronize the data. The leader is the only member that accepts write operations, while the replicas are only used to serve consumers.
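The log abstraction described above can be sketched in a few lines of illustrative Ruby (the class and method names here are ours; this mirrors only the append/offset reading semantics, not replication or retention):

```ruby
# Minimal append-only log: messages are never removed, and every
# consumer tracks its own read position (the offset).
class StreamLog
  def initialize
    @entries = []
  end

  # Appending returns the offset assigned to the message.
  def append(message)
    @entries << message
    @entries.length - 1
  end

  # Non-destructive read: any consumer can attach at any offset and
  # re-read the same data as many times as needed.
  def read_from(offset)
    @entries[offset..] || []
  end
end

log = StreamLog.new
log.append("m0") # offset 0
log.append("m1") # offset 1
log.append("m2") # offset 2

# Two consumers read independently; neither removes anything.
consumer_a = log.read_from(0) # ["m0", "m1", "m2"]
consumer_b = log.read_from(2) # ["m2"]
```

Because reads are non-destructive, re-running `read_from` with the same offset always yields the same messages, which is exactly what makes replay possible.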

RabbitMQ Queues vs. Streams

Streams are here to complement queues and to expand the use cases for RabbitMQ. Traditional queues are still the best tool for the most common RabbitMQ use cases, but they do have their limitations: there are times when they are not the best fit.

Streams are, similarly to queues, a FIFO data structure, i.e. the oldest message published will be read first. Providing an offset lets the client skip the beginning of the stream, but the messages will still be read in the order of publishing.

Consider a traditional queue in RabbitMQ with a couple of messages and a consuming application. After the consumer registers, the broker starts dispatching messages to the client, and the application can start processing.

At this point, the message is at an important stage of its lifetime: it has left the sender’s side but is still in flight on the consuming side. The broker still needs to take care of the message, because it can be rejected, and the broker must know that it hasn’t been acknowledged yet. After the application finishes processing the message, it can acknowledge it, and from then on the broker can get rid of the message and consider it processed. This is what we can call destructive consumption, and it is the behaviour of Classic and Quorum Queues. When using Streams, the message stays in the Stream as long as the retention policy allows for it.
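The difference between destructive and non-destructive consumption can be reduced to a toy example, with plain Ruby arrays standing in for a queue and a stream (illustrative only, not how the broker is implemented):

```ruby
# Destructive vs. non-destructive consumption, side by side.
queue  = ["m0", "m1"]
stream = ["m0", "m1"]

queue_msg  = queue.shift   # destructive: "m0" is removed from the queue
stream_msg = stream.first  # non-destructive: "m0" stays until retention discards it
```

After these two reads the queue holds only `"m1"`, while the stream still holds both messages and another consumer could read `"m0"` again.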

Implementing massive fan-out setups with RabbitMQ was not optimal before Streams. Messages come in, go to an exchange, and are routed to a queue. If you want another application to process the messages, you need to create a new queue, bind it to the exchange, and start consuming. This process creates a copy of each message for every application, and if you need yet another application to process the same messages, you repeat the process: yet another queue, a new binding, a new consumer, and a new copy of every message.

This method works, and it’s been used for years, but it doesn’t scale elegantly when you have many consumer applications. Streams provide a better way to implement this, as the messages can be read by each consumer separately, in order, from the Stream.

RabbitMQ Stream Throughput using AMQP and the Stream Protocol

As explained in the talk, Streams achieved higher throughput than Quorum Queues: about 40,000 messages per second with Quorum Queues and 64,000 messages per second with Streams. This is because Streams are a simpler data structure than Quorum Queues; they don’t have to deal with complications such as message acknowledgement, rejected messages, or requeuing.

Quorum Queues are still the state-of-the-art replicated and persistent queues; Streams are for other use cases. When using the dedicated Stream protocol, throughput of one million messages per second is achievable.

The Stream protocol has been designed with performance in mind. It utilises low-level techniques such as the sendfile libc API, the OS page cache, and batching, which makes it faster than accessing Streams over AMQP.

RabbitMQ Stream Plugin and Clients

Streams are available through a new plugin in the core distribution. When it is turned on, RabbitMQ starts listening on a new port which can be used by clients that understand the Stream protocol. The plugin is integrated with the existing infrastructure in RabbitMQ, such as the management UI, the REST API, and Prometheus.

There are dedicated Java and Go clients that use this new stream protocol. The Java client is the reference implementation, and a tool for performance testing is also available. Clients for other languages are being actively worked on by the community and the core team.

The stream protocol is a bit simpler than AMQP: there’s no routing. You just publish to a stream (there’s no exchange involved), and you consume from a stream just like from a queue. No logic is needed to decide where the message should be routed. When you publish a message from your client application, it goes over the network and then almost directly to storage.

There is excellent interoperability between streams and the rest of RabbitMQ: messages can be consumed by an AMQP 0.9.1 client application, and it also works the other way around.

Example Use Case for Interoperability

Queues and Streams live in the same namespace in RabbitMQ, so you can specify the name of the Stream you want to consume from using the usual AMQP clients, passing the x-stream-offset argument to basicConsume.

It’s very easy to publish with AMQP clients because it’s the same as with Queues, you publish to an exchange. 

Above is an example of how you can imagine using streams. A publisher publishes messages to an exchange and, depending on the routing key, each message is routed to a different queue, one queue per region of the world: a queue for the Americas, one for Europe, one for Asia, and one for HQ. Each region has a dedicated consuming application that does some region-specific processing.

If you upgrade to RabbitMQ 3.9 or later, you can simply create a stream and bind it to the exchange with a wildcard, so that all messages are still routed to the queues but the stream also receives every message. You can then point an application using the Stream protocol at this stream, and this application could, for example, run some worldwide analytics every day without even needing to read the stream very quickly. This is how streams can fit into existing applications.

Guarantees for RabbitMQ Streams

Streams support at-least-once delivery through a mechanism similar to AMQP Publish Confirms. There’s also a deduplication mechanism: the broker filters out duplicate messages based on a publishing sequence number, such as a key in a database or a line number in a file.
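The publishing-sequence-number idea can be sketched in a few lines (illustrative Ruby; the class and method names are ours, not the broker’s): the broker remembers the highest id accepted per producer and drops anything at or below it.

```ruby
# Sketch of broker-side deduplication: each producer attaches a
# monotonically increasing publishing id, and anything already seen
# from that producer is filtered out.
class Deduplicator
  def initialize
    @last_seen = {} # producer name => highest publishing id accepted
  end

  # true if the message should be stored, false if it is a duplicate
  def accept?(producer, publishing_id)
    last = @last_seen[producer]
    return false if last && publishing_id <= last
    @last_seen[producer] = publishing_id
    true
  end
end

dedup = Deduplicator.new
dedup.accept?("importer", 1) # => true
dedup.accept?("importer", 2) # => true
dedup.accept?("importer", 2) # => false (retransmission is filtered out)
```

This is why an id like a database key or a file line number works well: a publisher that crashes and retries simply resends with the same id, and the broker ignores the duplicate.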

There is flow control on both sides: the TCP connections of overly fast publishers are blocked, and the broker only sends messages to a client when the client is ready to accept them.


Streams are a new replicated and persistent data structure in RabbitMQ that models an append-only log. They are good for large fan-outs, support replay and time-travel features, suit high-throughput scenarios, and handle large logs well. They store their data on the file system and never in memory.

If you think Streams or RabbitMQ could be useful to you but don’t know where to start, talk to our experts; we’re always happy to help. If you want to see the latest features and case studies from the world of RabbitMQ, join us at RabbitMQ Summit 2022.

The post Introducing Stream Support In RabbitMQ appeared first on Erlang Solutions.

by Erlang Admin at April 14, 2022 08:02

April 13, 2022

Maxime Buquet

An overview of my threat model

I was interested in knowing what kind of threat model people had when using XMPP, so I asked on the newly created XMPP-related community forum – which uses Lemmy, a decentralized alternative to Reddit built on ActivityPub. I had an idea for my own answer, but I didn’t realize it was going to be this long, so I decided to write it down here instead. I’ll be posting the link there.

Building up a threat model means identifying what and/or whom you are trying to protect against. This allows you to take steps to ensure you are actually protected against what you think you want to protect against. A threat model is meant to be refined, improved, etc.

I have two main use-cases and I’ll go through one of them; the other one is less involved, even though it is definitely influenced by this one. This is surely incomplete, but it should still give a pretty good overview.

I started doing some activism over the past few years and I’ve had to adapt how I communicate. It seems not many people in these groups are aware of the amount of information that’s recoverable by an attacker. I was surprised how little security culture there was, even though I wasn’t doing much of it myself before (because I didn’t think I needed it, really). As you may have guessed, this concerns a lot more than just instant messaging, but that is what this article focuses on.

The threat model

For this use-case, I want to make it hard for anybody to trace my actions back to my civil identity and those of my friends. While I know this is never going to be perfect, and the attacker here has way more resources than we have, we do what is possible to reduce the impact on us. I am also aware that many attacks are theoretical and may be used nowhere in practice, but that doesn’t mean we should ignore them either.

Online, I want to protect myself against passive state-level surveillance, but also targeted surveillance to some extent. Offline, I need to protect the devices I use. In case they are seized by the police, I want to prevent them from getting too much information, so they have less material to charge us with. But if it gets to this, chances are they will be able to associate my different identities.

Some may think that with this threat model in mind I wouldn’t trust the server administrator, but this is a false dichotomy. What I don’t want is my data falling into the hands of an intruder, such as the police taking over the server. Server admins are legally required to hand over encryption passphrases in many jurisdictions, for one, but also mistakes are human, and hacking into a server may not be so hard with the right amount of resources.

How does this work with XMPP?

First, this is not specific to XMPP: we don’t use our civil identities, we use pseudonyms. In these circles we mostly don’t know each other’s civil identities, and they’re not useful anyway. It’s the same online, for example in the free software community, where there’s no reason you’d need this information.

We use Tor, so the ISP and middle boxes don’t know where we connect to, and the XMPP server doesn’t know where we connect from.

We create accounts on populated public XMPP servers, connect to them using TLS – which has been the default for a long time now – and use members-only / private (non-public) rooms to talk together, with OMEMO. We don’t know all of the people in a room, but there is some kind of trust chain.

We don’t verify OMEMO fingerprints, as we may not know everybody in the room, and changing devices/OMEMO keys also causes user-experience pain when combined with fingerprint verification.

On devices (PCs, smartphones), we use full-disk encryption where possible. As we generally use second-hand phones, the feature may not always be available. Generic advice I give is to set a passphrase on the OS and to clear client logs regularly. This can be configured in Conversations on Android; I don’t know about iOS clients.

The baseline is: your smartphone is your weak point, even though most of us have one because it’s convenient. This is certainly the first piece that will incriminate you, if it’s not you or your friends doing so inadvertently.

What I’d like to improve in XMPP

There are so many details that I have no clue about that could be used to correlate my different identities.

I use multiple accounts on Conversations, as well as Dino on the desktop for this use-case. Randomizing connections to the various accounts could be one thing to improve.

I don’t use Poezio for anything other than my civil identity, because Poezio isn’t used very much. Even though that may also be the case for Dino…

Currently, a few things in server logs can be used to identify a client, such as the resource string (set by the client to something like clientname.randombits) or the disco#info result, which lists the capabilities of a client. Both are stored on the server for possibly good reasons, but that’s always more information that can identify somebody.

I remember developers asking for the resource to be easily distinguishable for debugging purposes. Something à la Docker container names should be good enough for this (a list of adjectives and names combined into a random <adjective>_<name>). I am not entirely sure what to do about disco#info being stored.
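Such a generator fits in a few lines. The word lists below are made up for illustration; the point is only that the result is memorable for debugging yet says nothing about which client produced it:

```ruby
# Hypothetical generator for resource strings that are easy to tell
# apart in logs but leak nothing about the client software in use.
ADJECTIVES = %w[brave calm eager quiet witty]
NAMES      = %w[curie darwin hopper lovelace noether]

def random_resource
  "#{ADJECTIVES.sample}_#{NAMES.sample}"
end

random_resource # e.g. "calm_hopper"
```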

A good point for public servers is that they don’t seem to store archives forever anymore (since GDPR? Or for disk-space concerns, maybe). They will generally have 2 weeks to 1 month of (encrypted) activity which, I grant you, may be enough in some cases to incriminate someone, but it’s probably better than logs that go back to -infinity.

The roster is also stored as plaintext on the server and can easily be seized by the police. An encrypted roster may not be as far off as we imagine: similar efforts have been made in Dovecot to encrypt the user mailbox with a user-provided passphrase. This wouldn’t prevent servers from recreating the roster based on activity while the user is logged in, but that requires more effort and many wouldn’t bother – leaving this data unavailable as plaintext by default.

On the client side, I would like more private defaults. Tor support is a MUST; fortunately Conversations has it. It’s possible to use Tor with Dino, but one has to know how to set it up on their system, there’s no way to enforce using Tor, and it’s not shown whether it’s in use either. Poezio has the same issue.

Storing logs forever is also one thing that I find annoying. It can be configured in Conversations, but it’s not limited by default: the option is hidden in the Expert Settings, with “Never” as the default for deleting messages automatically.

Dino doesn’t have any settings regarding logs; I’d have to clear them myself by going through the SQLite database (pretty technical already). Poezio has a use_log setting that stores every message (and presences, depending on config), and it is also True by default.

Interaction with OMEMO between non-contacts is a mess. Some servers deploy the mod_block_strangers module as an anti-spam measure: when a user from such a server joins a private room, non-contacts are prevented from fetching their keys. Dino creates the OMEMO node as accessible only by contacts (to prevent deanonymization in some Prosody MUCs). And Conversations doesn’t allow sending encrypted messages unless it has the keys of all participants in a private room.

I am not even talking about OMEMO implementations (using OMEMO 0.3.0), which per the spec only encrypt the <body/> element of a message, leaking actual data or greatly restricting the feature set depending on the feature used. This is fixed in the newer version of the spec, but that version is deployed nowhere at the moment.

I am also not talking about why XMPP and not say Signal, or Telegram. I have already talked about this in part in other articles but that may warrant its own article at some point.

This article only scratches the surface. There are many more details that would need to be ironed out. And of course implementations need to make choices and can’t answer every single use case out there. I do wish privacy were more of a concern, though.

Where has “privacy by default” gone? Somebody bring it back, please.

by pep. ( at April 13, 2022 11:00


Computing International Call Rates with a Trie

A few months ago we launched International calling with JMP.  One of the big tasks leading up to this launch was computing the rate card: that is, how much calls to different destinations would cost per minute.  While there are many countries in the world, there are even more calling destinations.  Our main carrier partner for this feature lists no fewer than 59881 unique phone number prefixes in the rates they charge us.  This list is, quite frankly, incomprehensible.  One can use it to compute the cost of a call to a particular number, but it gives no confidence about the cost of calls in general.  Many items on this list are similar, and so I set out to create a better list.

My first attempt was a simple one-pass algorithm.  This would record each prefix with its price, and then, if a longer prefix with a different price were discovered, it would add that as well.  This removed the most obvious effectively-duplicate data, but still left a very large list.  I added our markup and various rounding rules (since increments of whole cents are easier to understand in most cases anyway, for example), which cut things down a bit further, but it became clear that one pass was not going to be sufficient.  Consider:

  1. +00 at $0.01
  2. +0010 at $0.02
  3. +0011 at $0.02
  4. +0012 at $0.02
  5. +0013 at $0.02
  6. +0014 at $0.02
  7. +0015 at $0.02
  8. +0016 at $0.02
  9. +0017 at $0.02
  10. +0018 at $0.02
  11. +0019 at $0.02

There are many sets of prefixes that look like this in the data.  Of course the right answer here is that +001 is $0.02, which is much easier to understand than this list, but the algorithm cannot know that until it has seen all 10 overlapping prefixes.  Even worse:

  1. +00 at $0.01
  2. +0010 at $0.02
  3. +0011 at $0.02
  4. +0012 at $0.02
  5. +0013 at $0.02
  6. +0014 at $0.02
  7. +0015 at $0.03
  8. +0016 at $0.02
  9. +0017 at $0.02
  10. +0018 at $0.02
  11. +0019 at $0.02

From this input we would like:

  1. +00 at $0.01
  2. +001 at $0.02
  3. +0015 at $0.03

So just checking whether the prefixes we have seen so far form a fully-overlapped set is not enough.  Well, no problem: it’s not that much data, so perhaps I can implement a brute-force approach and be done with it.

Brute force is very slow.  On this data it completed, but as I kept wanting to tweak the rounding rules and other parts of the overlap detection, the speed became a real problem.  So I went searching for a non-brute-force approach that would be optimal across all prefixes and fast enough to re-run often, in order to play with the effects of rounding rules.


As I was discussing the problem with a co-worker, trying to speed up lookups, we started thinking about trees.  Maybe a tree where traversal to the next level is determined by the next digit of the prefix?  As we explored what this would look like, it became obvious that we were reinventing a Trie.  So I grabbed a gem and started monkeypatching things.

Most Trie implementations are about answering yes/no membership questions and don’t store anything but the prefix in the tree.  I wanted to be able to “look down” from any node in the tree to see if the data below was overlapping, so storing rates right in the nodes seemed useful:

def add_with(chars, rate)
    if chars.empty? # leaf node for this prefix
        @rate = rate
    else
        add_to_children_tree_with(chars, rate)
    end
end

But sometimes we have a level that doesn’t have a rate, so we need to compute its rate from the majority-same rate of its children:

def rate
    # This level has a known rate already
    return @rate if @rate

    groups =
        children_tree.each_value.to_a         # Immediate children
        .select { |x| x.rate }                # That have a rate
        .combination(2)                       # Pairwise combinations
        .select { |(x, y)| x.rate == y.rate } # That are the same
        .group_by { |x| x.first.rate }        # Group by rate
    unless groups.empty?
        # Whichever rate has the most entries in the children is our rate
        @rate = groups.max_by { |(_, v)| v.length }.first
        return @rate
    end

    nil # No rate here or below
end
This algorithm is naturally recursive on the tree, so even if the immediate children don’t have a rate they will compute from their children, etc.  And finally a traversal to turn this all back into the flat list we want to store:

def each
    if rate
        # Find the rate of our parent in the tree,
        # possibly computed in part by asking us
        up = parent
        while up
            break if up.rate
            up = up.parent
        end

        # Add our prefix and rate to the list unless parent has it covered
        yield [to_s, rate] unless up&.rate == rate
    end

    # Add rates from children also
    children_tree.each_value do |child|
        child.each { |x| yield x }
    end
end
This (with rounding rules, etc.) cut the list from our original 59881 entries down to 4818.  You can browse the result.  It’s not as short as I was hoping for, but many destinations are manageable now, and thanks to a little bit of Computer Science we can tweak it in the future and just re-run this quick script.

by Stephen Paul Weber at April 13, 2022 05:30

April 10, 2022

Maxime Buquet

Updates from the Poezio ecosystem

Releases have happened recently that revolve around Poezio, a TUI (Terminal UI) client for XMPP, including Poezio itself, its backend XMPP library Slixmpp, and also the poezio and slixmpp plugins for OMEMO.

Many bug fixes and improvements

Examples of Poezio screenshots. Thanks jonas’ for the blue theme!

Mathieui has already made a proper release note for Slixmpp and I invite you to read it! It includes many bugfixes of course, and internal changes around async handling, that may reflect on some of the APIs you are using.

Poezio has also seen many improvements.

Internally, for one, our default branch has been moved to “main”, many type hints have been added, implicit casts (safeJID) have been removed, lots of event handlers and calls are now async, APIs from Slixmpp are being used instead of reimplementing our own, there has been a lot of refactoring, and various performance improvements.

Pypy3 support was removed because it was causing many users to use the cffi module (implemented specifically for pypy3) instead of the more performant C implementation. For those who are running from sources and not using the update script: don’t forget to run make to build the C module.

A license change has happened, and Poezio is now under GPLv3+! While I am not exactly in favour of intellectual property1, this is a straightforward lever we have against capitalism2. Poezio being a prime resource for Slixmpp examples, GPL code should reasonably ensure that the 4 freedoms reach end-users. In practice, this should allow for poezio-omemo to be merged into Poezio. I am now personally hoping for Slixmpp to change its license as well.

And other changes more visible to users! To name a few quality-of-life improvements: xmpp:...?join URIs are now handled in /join, impromptu room creation is more reliable and creates rooms with shorter names, and tab names in the activity bar can be colored using Consistent Color Generation by setting autocolor_tab_names to True. Read more in the changelog.

The tab name color on top can also be reversed (foreground/background) in the theme to look the same as the activity bar below.
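The idea behind Consistent Color Generation (XEP-0392) is that every client derives the same color for the same nickname deterministically, with no coordination. A rough Ruby sketch of the hue derivation, based on our reading of the spec (see XEP-0392 for the authoritative algorithm and its palette-correction steps):

```ruby
require "digest"

# Map a nickname to a hue angle deterministically: hash the name and
# turn the first 16 bits of the digest (little-endian) into degrees.
def nick_hue(nick)
  digest = Digest::SHA1.digest(nick)
  value = digest.bytes[0] | (digest.bytes[1] << 8) # first 16 bits, little-endian
  value / 65536.0 * 360                            # angle on the hue wheel
end

nick_hue("mathieui") # same value on every machine, in [0, 360)
```

Because the hue depends only on the nickname, "mathieui" is colored identically in every conforming client, which is what makes the colored tab names and fingerprints recognizable across devices.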

Plugins have seen changes as well. A new untrackme plugin replaces the now deprecated remove_get_trackers. Link Mauve has also developed a sticker plugin (to send them), similar in essence to what Movim has been doing for ages. Rich presence (activity, gaming, mood and user tune) has been removed from Poezio core and moved in the user_extras plugin. And again many fixes.

Poezio sticker plugin in action!

Many of these fixes have been realized by mathieui, who is by far the biggest committer on the release, and in general probably the person with the best understanding of the project. Thanks also to louiz for providing the infrastructure all this time, and to eijebong, Ge0rG, Kaghav Gururajan, kaliko, Thomas Hrnciar, jonas’, and southerntofu for the many patches.


Archive handling (MAM) was already in the previous release, but has been reworked and should now be more reliable.

When opening a tab, Poezio will fetch two screen-pages’ worth of messages if it has no logs for this tab. Archives are automatically stored locally if so configured (the default), in which case they won’t be re-downloaded but read directly from the local copy the next time they’re requested.

To read older chat messages in a tab, just scroll up with PageUp and Poezio will fetch more automatically if it needs to.

This is configurable with newly introduced options such as mam_sync and mam_sync_limit, which control whether MAM is used and how many messages to fetch at most. The use_log option also controls whether archives are stored locally.
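For reference, the options named above might look like this in a Poezio config file (the section name and the values here are illustrative, not Poezio's shipped defaults):

```ini
[Poezio]
# fetch history from the server archive (MAM) when opening a tab
mam_sync = true
# upper bound on the number of archived messages to fetch
mam_sync_limit = 2000
# keep a local copy of messages (fetched archives won't be re-downloaded)
use_log = true
```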

Some work around storing message IDs – that our log format doesn’t do – will be needed in the future to allow for easier message deduplication.

End-to-End Encryption

The Poezio E2EEPlugin API has been improved to accommodate changes in poezio-omemo, slixmpp-omemo and changes of the OMEMO backend library. Two plugins which are also seeing changes!

Heartbeats are now supported. Heartbeats are meta-messages that transfer only cryptographic key material (nothing else) and are used to strengthen OMEMO’s forward secrecy. This is particularly relevant for clients like Poezio that can stay running in the background for some time, receiving messages without replying.

Some other changes include colored fingerprints using the Consistent Color Generation document – such as specified in the current (0.8) OMEMO spec – and sending encrypted media (aesgcm URIs).

What hasn’t changed is that this plugin lacks a UI and trust management. Hopefully this should come soon, with a little motivation to do UI work.

What comes next

All in all, there aren’t (m)any revolutionary changes, but these releases bring many fixes for paper cuts that will hopefully make users happier. This makes me think that even though Poezio is far from perfect, there don’t seem to be many important things missing.

There are however changes that would require a lot of refactoring, such as a multi-account feature, or easier maintenance in general.

We have decided to start migrating Poezio to Rust, in part to be able to refactor the project more easily, and also because it’s a language we’ve come to appreciate over the years with experience in other projects, and more specifically with xmpp-rs, an XMPP library in Rust.

All of this will happen right after the release, and we invite interested people to join the effort!

P.S.: I am looking for poezio screenshots with various setups to display in public places, under a free license. Please send me your screenshots in relatively high quality at blog at And don’t forget to ask pixels appearing on the image for permission!

  1. TODO: write about this. A TL;DR would certainly be “abolish intellectual property, and private property in general”. ↩︎

  2. When they don’t decide to ignore it and give us the finger. ↩︎

by pep. ( at April 10, 2022 11:00

April 07, 2022

Erlang Solutions

Using Elixir and WhatsApp to Fight COVID19


Discover the inside story of how the World Health Organisation’s WhatsApp COVID-19 hotline service was launched in 5 days using Elixir. At the beginning of March 2020, launched the world’s first WhatsApp-based COVID-19 response for the South African Ministry of Health. The service was designed, deployed, stress-tested, and launched.

In 5 days. It scaled, before any kind of public launch, to 450K unique users on the first day and has since grown to serve over 7.5 million people in South Africa; it is core to the national government’s ongoing COVID-19 related communication and support strategies. Simon de Haan, CTO and Co-Founder of Turn.io, joined us at ElixirConf EU Virtual 2020 to share the case study, learnings, and takeaways from this experience. Watch the video or read the transcript below to find out more.

My name is Simon de Haan. I’m based in the Netherlands. I am the CTO and Co-Founder of Turn.io, a software-as-a-service company. We’ve spun out of a nonprofit in South Africa, and we have a decade-long history of using mostly SMS and similar texting platforms for social-good initiatives in areas such as health and education, employment, civic engagement, and things like that.

Since 2017, we’ve been working with WhatsApp specifically. Turn is a software-as-a-service tool, like I said earlier, for teams to have personal, guided conversations that improve lives at scale.

Now, practically what that means is that we help social impact teams scale their work significantly, while not being overwhelmed. The strategy here is quite simple. We connect teams to their audiences over WhatsApp. We help prioritize the key conversations that need the most urgent attention, and we help guide those conversations towards outcomes. Then we track whether or not that’s happening.

If you’re thinking about social impact teams, what type of teams is that? That’s NGOs, nonprofits, social enterprises. In the U.S., they’re called, for example, B Corps, but also very large humanitarian organizations like, for example, the WHO, which I’ll be talking about shortly.

An example of an organization or initiative that uses is MomConnect in South Africa. That’s the South African National Department of Health’s Maternal Health Program, which we launched as part of a WhatsApp pilot in 2017.

The Department of Health in South Africa needed to be in regular contact with pregnant women. They needed to be able to triage questions coming in and give guidance according to national policy and keep track of the progress being made with regards to clinic visits, inoculations, nutrition, and later on early childhood development. Now, this started in South Africa. The nonprofit that we’ve spun out of is also based in South Africa so that’s why some of our roots are there.

What this looks like for these kinds of conversations, just to give you an idea, is that, for example, we can send people reminders that their HIV medicine, the ARVs, are available at a clinic and it will help prevent the transmission of HIV to a baby during childbirth. This way you get all sorts of questions that come back.

For example, this is a real example, but it’s not a real profile picture. Questions that you would get are things like, “If I am HIV positive, is it possible to breastfeed my child?” What we do is apply natural language understanding to automatically triage the questions coming in. Here we’ve identified it as a question, and then we’ve matched it with an appropriate answer that’s come back that is then immediately sent back to the mother as the relevant answer for her question.

Some other examples are things like mixed media. For example, a mother has received some medicine from a clinic, and they’ve bought some medicine from a store as well and they’re not quite sure how to go about this. We help them identify what is this question about. In this case, it’s vitamin adherence.

We help the triage process to figure out what is important, what needs to be attended to, who’s the best person who can answer these questions. Just to connect them to a real human, in the case of MomConnect.

There are very specific things around, for example, behaviors. For 10 weeks pregnant, there’s some very clear guidance on what you can do. This is an example of washing your hands. It relates to maternal health. But I think all of us have become fairly experienced over the last six months with regards to the importance of washing hands and things like that. The software is able to track specific messages that relate to specific behaviors. Again, this is using machine learning models and natural language understanding to just basically do that matching.

Another example here is when we’re sending a reminder for a mother where we tell them, “Hey, your child is so and so old, you need to send them or schedule your inoculations or your vaccinations. This is important why, and this is where you can do it.”

Then we get a message back: “Hey, we already went to the clinic on the 1st of August, and things are fine.” Then we can track that as a, “Okay, this is a six-week immunization, that step’s been completed and everything is on track, there’s nothing else needed.”

This is what we started with originally in South Africa with MomConnect. It has its roots in SMS, and we started introducing WhatsApp as a pilot. Just as an anecdote, we saw immediately that user behavior on SMS and WhatsApp was entirely different once we started allowing WhatsApp. This is hard to comprehend for people in the U.S., for example.

But as soon as we started allowing messaging over WhatsApp, we saw that the volume ratio of SMS to WhatsApp was 1 to 10 immediately, which was quite incredible. That also informed our decisions moving forward, “Okay, this is a different thing than what we’ve done before, we need different infrastructure to address this kind of volume and usage patterns.”

Launching COVID Connect in South Africa

That worked and building on those experiences, we launched COVID Connect in South Africa, which is the world’s first COVID-19 hotline for a government on the WhatsApp Business API. That reached 500,000 unique users before the official launch and it has since grown to 8 million subscribers.

This was the world’s first government COVID-19 hotline where people could receive accurate information on the state of COVID-19 in their country, what the guidance was, etc. That led to our work with the WHO, with whom we had a very long-standing relationship from previous work on various initiatives. So, things came together with Facebook, the WHO, and ourselves, and we were asked to help launch the service in pretty much the space of a week. It was the world’s first launch of this size.

What made it pressing was that in the area of COVID-19, misinformation can spread faster than the virus itself. So, when there’s an entirely new situation like, for example, COVID-19, what we learn, what science knows about it, and what the best guidance is can change on a day-by-day basis. For instance, I’m based in the Netherlands. The guidance of this past week is different again from the last two weeks, and is different again from three months ago.

Having a system out there that’s accurate, that people can access and know what the latest stats are, what the latest guidance is, what the latest understanding of the virus is, is just extremely important in a global pandemic like the one we still find ourselves in.

The use case is quite simple. For those of you who do have WhatsApp, you can scan the QR code with your WhatsApp client or with your normal iOS camera, and it’ll launch the service for you. You can interact with the service, and you can get live updated case numbers per country, which integrate with the Johns Hopkins dashboards to retrieve that information.

You have the latest news relating to COVID-19 coming from the WHO. You’ve got information on basically how to combat misinformation around COVID-19. There’s been plenty of that around, like, what works, what doesn’t work, playing on people’s fears, and things like that. The bot fulfills a substantial role there as well. While I’m talking, feel free to interact with the service. It should be responsive for you. If not, then well, you all know what a technical demo is like.

On launch day, this is largely what it looked like. First, there was an announcement by Director-General Tedros, which you can see in the first spike there where they announced the service. Then there was a second announcement a few hours later from Mark Zuckerberg, who posted it on his Facebook page. Apparently, Mark has a lot of followers, because immediately that caused quite an increase in messages per second being processed through the system.

Now, this is a single cluster in this graph. We had multiple clusters in various zones around the world to deal with the traffic. So, the story here is about what it looked like to build a service like this in such a short amount of time. Which tools worked? What helped? What were our learnings? Hopefully, you’ll take something away from this talk that helps you in your day-to-day work.

I’m hoping we don’t have many more of these global pandemics where these learnings would apply, but I’m hoping that there are more learnings for you that you can take away in your day-to-day work as well.

This is the timeline of things. COVID-19 work started on the 9th of March. We had a soft launch around the 15th of March, which was, to be honest, quite accidental. I’ll talk a little bit more about that later. And then the official launch was on the 18th of March. Now, that was the South African version, the one that scaled to now about eight million.

The work for the WHO started on the 11th. The WHO infrastructure was ready on the 17th. The soft launch was around the 18th, somewhere around there, and the public launch was on the 20th of March. The infrastructure for the WHO was on AWS. So, part of the work there was not necessarily building the application, but just making sure the infrastructure was up and running and ready to go.

The amazing thing was that 10 million people used the service in just over 48 hours, which is a testament both to the reach of WhatsApp and to the tools and infrastructure, and the Elixir and Phoenix frameworks, that made this possible for us. We were proud, and still are proud, of the service we’ve built. And this is definitely the first time for us that a service has scaled to these numbers in such a relatively short amount of time.

An Elixir based app to support Mental Health

So, I have an announcement as well. Again, this is me playing on one of our learnings around soft launches that I’ll elaborate on a little bit later. World Mental Health Day is coming up and the WHO is launching a service specifically for that.

Given past experiences of emergencies, everyone knows the mental health and psychosocial support that’s needed in times of emergency. Especially during COVID-19, with everyone either in quarantine or isolated in some way, the expectation is that the need for psychosocial and mental health support is only going to increase.

For the WHO, as an extension of the original COVID-19 work we’ve done, we’re also launching a digital guide to stress management. Again, you can scan the QR code, or if you’re already using the service, just type the word breath and it’ll launch the service for you.

Basically, this just takes you through several stress-management exercises. You can type Start, and it’ll start your stress-management journey and take you through a few days of grounding and stress-management tools to help you deal with anything stress-related, and it may be partially built on the Elixir stack.

This is the first time we’re actually building things that are more stateful than we’ve done before using this infrastructure, so this is a new thing. This hasn’t been public. There have been a few press releases, but it hasn’t received a big press push yet, so many of you will be seeing this for the first time. I’m hoping that it will be of use to many people.

The way this is built, as I said earlier, is a bit of a departure from earlier designs: it’s far more stateful and more complex. This is our first run with this approach. It allows you to build more threaded, sequential, text-based interactions with the service. You can see on the left there’s a bunch of actions, and on the right there’s the conversation that’s being modelled. I’ll touch on that a little bit later as well.

All the other stuff we’ve done was very stateless, and that also made it quite a bit easier to scale. This is the first time that we’re doing more stateful things. We’re confident it’ll hold up, but it’s a new piece of technology, and that’s always exciting.

The stack is actually quite simple. For the WHO, it’s just Kubernetes on Amazon Web Services provisioned with Terraform. We don’t have any specific alliance or preference for any of these large hosting providers. Kubernetes still feels very, very academic to us, but it provides a useful base platform for deploying; we almost treat it as an operating system. It’s worked well for us, so we don’t have any complaints there.

So other than that, it’s Postgres 9.6, Elasticsearch, Faktory (a job queue worker), and then Elixir 1.10.4 on OTP 22. I don’t know if Elixir is boring. But the other ones, certainly Postgres, are a bit of a boring technology. We love Postgres. It’s extremely solid, it’s a very reliable workhorse for the work we’ve done. It’s performed incredibly well.

Faktory. For those that don’t know it, those from the Ruby community would be familiar with Sidekiq, the job worker. Faktory is from the same author, and it’s a language-agnostic job worker that, certainly in our experience, works very well.

Digging a little bit further into the stack, we’re using the Phoenix web framework, and we’re using React on the frontend. Phoenix is working extremely well for us. We’re not doing anything relatively new or fancy there with regards to the new releases stuff mostly because a bunch of the code that we’ve written for this predates that. These are just deployed as Docker containers running the Phoenix app. Then there’s a React frontend which is managed and deployed via Netlify.

Then we use Absinthe for GraphQL on the backend and Apollo on the frontend. All of that communication happens via WebSockets. Data is managed with DataLoader using GraphQL. Arc for storage to either S3 or Google Cloud Storage depending on the hosting environment we’re working on.

We’re using the combination of Quantum and Highlander to schedule jobs like, for example, cron, or recurring jobs. We had an issue at one point where we almost issued a distributed denial-of-service on ourselves because it was quite difficult at times to limit the number of processes that Quantum ran on a schedule to just a single node in a cluster. But Highlander solved that beautifully for us.

So, if you’re looking for a way to combine jobs running on a schedule in a clustered environment, but you only want it running on one node at a single time, Highlander will help you there. We’re using a library called ExRated just for rate-limiting all API endpoints and FaktoryWorkerEx to talk to the Faktory job server.
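As a rough illustration of that pattern, here is a minimal sketch (module and app names are hypothetical, not Turn’s actual code) of running a Quantum scheduler under Highlander, so the scheduled jobs run on exactly one node of the cluster at a time:

```elixir
# Hypothetical module names, for illustration only.
defmodule MyApp.Scheduler do
  # Quantum provides cron-style recurring jobs.
  use Quantum, otp_app: :my_app
end

defmodule MyApp.Application do
  use Application

  def start(_type, _args) do
    children = [
      # Highlander starts its child process on a single node in the
      # cluster and restarts it on another node if that one goes down,
      # so the cron jobs never run on more than one node at a time.
      {Highlander, MyApp.Scheduler}
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end
```

Wrapping the scheduler in Highlander’s child spec is the whole trick: the rest of the supervision tree stays untouched.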

The stack itself looks like this: we have the load balancer, which is generally provided by the hosting environment; SSL and all that stuff is terminated there. Then the Phoenix app, which is, you know, just a straight-up normal Phoenix app; there’s nothing really special about it. Those are all auto-scaled within limits based on CPU thresholds using Kubernetes, then automatically clustered with libcluster using the Kubernetes strategy. So that works extremely well for us. Both the Faktory workers and the Phoenix app are all joined in one big cluster.
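A libcluster topology using the Kubernetes strategy can look roughly like this. This is a hedged config sketch; the node basename and label selector are placeholders, not the actual deployment values:

```elixir
# config/prod.exs — placeholder values for illustration.
config :libcluster,
  topologies: [
    k8s: [
      strategy: Cluster.Strategy.Kubernetes,
      config: [
        # Discover peer pods via DNS, matching pods by label selector.
        mode: :dns,
        kubernetes_node_basename: "myapp",
        kubernetes_selector: "app=myapp"
      ]
    ]
  ]
```

With a topology like this, every pod that matches the selector automatically joins the same BEAM cluster as it comes up.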

Then the WhatsApp Business API for those that aren’t familiar with it, it’s several Docker containers that you need to run that take care of all of the end-to-end encryption and things like that. And for us, for the WHO, we run this with 32 shards on the WhatsApp Business API. And there’s a bunch of stateful services like Faktory server, Elasticsearch, and Postgres.

This stack was replicated in multiple zones around the world for load balancing. As many of you know, or may have already seen, the QR code just points at a URL, and WhatsApp conversations can be started with a URL. What we did was use a Bitly link to round-robin between different clusters to spread the load.

So, if you open the link, you first went to essentially a serverless cloud function which then hand-picked one of the various clusters around the world and assigned you to that one. Which helped us manage the load across these various installations for the launching of the service.
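The spreading logic above can be sketched like this. The actual implementation was a serverless cloud function, and the cluster URLs below are made up for illustration; picking uniformly at random gives the same spread on average without needing a shared counter:

```elixir
defmodule LoadSpreader do
  # Hypothetical regional cluster entry points.
  @clusters [
    "https://cluster-eu.example.org/start",
    "https://cluster-us.example.org/start",
    "https://cluster-ap.example.org/start"
  ]

  # Each incoming short-link hit gets redirected to one of the clusters.
  def pick_cluster do
    Enum.random(@clusters)
  end
end
```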

What worked

I think what’s impressed us is just the ease of clustering of BEAM nodes. Many of you are working with Elixir or have been working with Elixir for, I don’t know how many years already. This is probably old news.

We come from a Python background, and this is our team’s first production Elixir environment. Some of the things that were really hard problems in Python just don’t exist in Elixir. Simple things like publishing WebSockets via GraphQL subscriptions from any BEAM node are just so easy. It almost feels unfair if you’re coming from an environment that doesn’t have that clustering idea built in.

So subconsciously, there’s a whole set of problems that you’re almost inclined to not even approach simply because the language doesn’t allow that for you, or the underlying infrastructure doesn’t allow that for you.

For us, working with Elixir in many ways feels magical, and not in a bad, code-magic way. It’s more like, “Wow, there’s this whole new world of opportunities that we previously weren’t thinking about, that now are available to us.” Which is quite incredible.
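The point about publishing to GraphQL subscriptions from any BEAM node looks roughly like this with Absinthe. This is a hedged sketch; the endpoint, field, and topic names are illustrative, not the actual application code:

```elixir
# From any node in the cluster: subscribers connected over WebSockets to
# any other node still receive the event, because the BEAM cluster
# handles the routing for you.
Absinthe.Subscription.publish(
  MyAppWeb.Endpoint,
  %{id: message.id, body: message.body},
  message_received: "chat:#{chat_id}"
)
```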

The other thing that worked well is network control. We’ve stopped worrying about processes. Turn is a heavy network application. In some of our earlier Python applications, we had things like long-running network connections that were often problematic and forced us to make things asynchronous.

Then if you’re running things asynchronously, you introduce a whole bunch of other different problems like backpressure, or rate-limiting, that just become difficult because you still need to communicate between these various systems. Elixir as a language made those problems much, much simpler for us to reason about. I feel like it gives us a really good set of tools to manage those problems.

Another thing that worked well is monitoring. Now, this is not specifically Elixir I guess, but there are some great libraries and the BEAM VM allows you to introspect your processes very well. So, if you’re gonna run things at scale, or high volume, really invest in your monitoring and your observability.

Prometheus and Grafana are immensely valuable and will highlight upcoming problems. We use Zipkin to just get insights into delays when they happen.

Parts of Turn are pretty distributed as I was showing earlier. So being able to see which code paths are slow, Zipkin highlighted that to us. Now, on top of that, with Prometheus and Grafana, escalations through PagerDuty were very straightforward and worked extremely well.

Automation worked well. Again, many of these things sound almost trivially simple when you mention them. But these are still the kinds of learnings you get when you build a system like this.

So, if you’re running a small team, really invest in automating as much as possible. The value of a good CI/CD setup compounds over time. Our team size at launch was tiny; right now, we’re about seven developers, I think. But automation felt like it added another team member, or several, to our team. We didn’t have to worry about whether our deploys were going through, and we didn’t have to worry about versioning things.

And so many of the things that historically, I would have a team for to do automation, and the tools that are available now just didn’t require that.

So right now, production releases are built and deployed within three minutes of a tagged commit. QA releases are built and deployed on each commit. As a result of automation, our deploys are smaller and less stressful.

Feature flags: identify the things that you can live without, and make it easy to turn them off. For launch, we disabled live Elasticsearch indexing. This is both a thing that worked well and a thing that didn’t work well. We also disabled media support, and we kept everything within the service as stateless as possible.

For Feature Flags, Elixir’s pattern matching made this very easy in the codebase. There are specific things we don’t want this to happen, set a flag, skip that code path entirely, and then just continue. So that’s what allowed us to disable Elasticsearch very easily. I’ll touch a little bit more on Elasticsearch later.
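As a hedged sketch of what that looks like (illustrative names only, not Turn’s actual code), pattern matching on a flag lets one function clause skip a code path entirely:

```elixir
defmodule MyApp.Indexing do
  # When the flag is off, this clause matches and the whole code path is
  # skipped — nothing is enqueued, we just continue.
  def maybe_index(_doc, %{live_indexing: false}), do: :skipped

  # When the flag is on, hand the document to the search indexer (in the
  # real system this would enqueue an Elasticsearch indexing job).
  def maybe_index(doc, %{live_indexing: true}) do
    {:index, doc}
  end
end
```

Turning the feature back on is then just a matter of flipping the flag in the state passed in; no code changes are needed.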

Load test all the critical paths extensively. Make it easy to do so repeatedly, so that you can track the effectiveness of the changes that you’re making. Again, these things are very logical, but if you’re under stress and you need to deploy a thing within a couple of days for a global audience, these are the things that you’re likely to forget but do need to pay attention to.

We load tested the application to 1,800 requests a second on a single cluster, which is more than double our expected maximum. With that, we ensured that the response times remained below 100 milliseconds. We used to run those load tests.

Faktory, I touched on it earlier; the job server has been extremely reliable for us. For one of our clusters, it’s processed 1.7 billion events. Historically, we would have defaulted to something like RabbitMQ, but Faktory gives us retries with exponential backoff out of the box, which is extremely convenient. I know you can build these things on RabbitMQ and it’ll work well. It’s just one of those things that we now didn’t need to build. We’re very grateful for Faktory and the team behind it.

What didn’t work

So now, some things that did not work. Again, we touched on this a little bit earlier with the feature flags. Elasticsearch is a great piece of software, but in our experience, it’s difficult to run from an operational perspective. We are confident we could have done it, but we didn’t want to have to focus on that, so we disabled it for launch.

It’s kind of a what worked and what didn’t work combination here. It’s just one of those things if you don’t need to worry about it, don’t worry about it for launch, and make sure you could turn it back on when you do need it.

Other things that didn’t work were half automated. I say this, and I’ve heard other people say it as well: broken gets fixed, but shitty lasts forever. Some operational things were not automated the way they should have been in Terraform.

There are no squeaky-clean production environments. They just do not exist, certainly not if you’re under massive time pressure to deliver something because there’s a global pandemic. But that said, one can certainly work towards making sure things are not messy.

The reality is that some things will always be messy in a production system and it’s really hard to detangle those things. So right now, six months in, we’re still trying to detangle some of the shortcuts that we took with regards to deployments that aren’t working in the way that we would like them to work.

If you can avoid half automating things, then do so. Sometimes it’s better not to automate at all, and then bite the bullet later and do it well, than to do it halfway. Because if you do it halfway, it’s always gonna last longer than you would expect.

Key learnings

Soft launches are vital. This was accidental, to be honest. I’m saying here that we always made sure we were seeing high volumes before any major public launches. In a way, that’s true for every single launch except the first one. The first one was an accident, where a Department of Health representative tweeted about the service pre-launch.

What that did was give us about 24 hours to stress test the system, see that it was working, and observe it in production before any media attention was focused on it. Do this for your team, because it relieves the stress of big bang launches.

Soft launches are vital. So, everyone who’s now testing our stress guidance on the WHO service, thank you very much. If there are bugs, you’re helping us catch them before actually going live globally.

The other thing that we’ve learned, certainly for first launches, is to keep things simple. Everyone knows this, everyone repeats it, and it’s hard to do in production, in real-life use cases. The reality is that simple applications are just way easier to scale. So, we launched with only a single language, zero stateful interactions (it was just keyword-response things), no media support, and no search index, because we turned that off.

There were quite a few other services launched during the same time: national services, government services, and regional services. Pretty much all of them suffered a significant amount of downtime, simply because they couldn’t keep up with the load or they weren’t prepared. The WHO service was the largest of them at launch and was also the only one that stayed up the whole time.

In large part, that’s the stack working extremely well for us; Phoenix worked extremely well. But part of that is also just strategy, like keeping your application simple.

The other thing is to plan for surges. We defaulted to running at 5% of capacity at all times, because traffic tends to be quite bursty as a result of television coverage or social media activity. Running at 5% gave us the headroom to scale up as needed to deal with the surges in demand.

In a way, this spare capacity could feel like a waste. On the other side, the spare capacity made sure we were able to scale up and provide a valuable service to people in a, hopefully, once-in-a-century global pandemic.

The other thing is to ask for help. We’re a very small team and we needed help to pull this off. A huge amount of credit goes to the team of experts at both Amazon Web Services and WhatsApp, who worked alongside us for the biggest installation of this WhatsApp Business API at launch.

Practically, what this looked like is that we had multiple WhatsApp groups and constantly open Zoom calls, exchanging insights while we were all observing the system and how it worked. So ask for help. Even if you’re an expert in a field, still ask for help. Don’t go it alone.

Now, the other thing is, I think there’s a really big thank you that we are constantly wanting to express as a team. So, when this was going on, and you all saw the timeline, it was just a couple of days, we didn’t have a lot of time to prepare this.

We sent out an email to several smaller teams, asking them to be on standby should help be necessary. Mostly just saying, “Hey, you know, you’re delivering this piece of software, or you’re responsible for this piece of software, or you’re part of the team that manages this. It’s a critical piece of infrastructure for the world’s first global response to the COVID pandemic on WhatsApp. Would you please at least be aware of the fact and make yourself available should anything break or need attention?”

This was part of us reaching out for help in the community. And it was incredible. Everyone showed up within 24 hours. So, this is just a testament to the Elixir community. Many of you, I suppose, in many ways as well.

For this launch, we just wanna say a really big thank you to Dashbit for the Elixir advice. José jumped on a call, the team helped out, did some reviews, and gave some advice on how we could optimize things.

Sentry for the error reporting tools. We couldn’t have done it without that. Contribs for the Faktory job worker. 1.7 billion events on a single cluster is not a small number. And the team at Rasa for the natural language understanding.

Now further than that, also the Elixir community, just for all the tooling that made this possible: the various clients we’re using for various things, rate-limiting, all of the stuff that all of this relies on. Phoenix, Ecto, and Elixir itself, which just released 1.11, which we’re very excited about.

If you’d like to learn more about Turn.io, you can visit their website. If you’d like to find out more about how Elixir can empower you to have massively scalable solutions ready for market at rapid speed, talk to us. And if you’d like to hear the latest Elixir case studies, features, and frameworks, join us at ElixirConf EU 2022.

The post Using Elixir and WhatsApp to Fight COVID19 appeared first on Erlang Solutions.

by Erlang Admin at April 07, 2022 10:36

April 06, 2022

Ignite Realtime Blog

inVerse Openfire plugin 9.1.0-1 released!

Earlier today, version 9.1.0 release 1 of the Openfire inVerse plugin was released. This plugin allows you to easily deploy the third-party Converse client in Openfire. In this release, the version of the client that is bundled in the plugin is updated to 9.1.0!

The updated plugin should become available for download in your Openfire admin console in the course of the next few hours. Alternatively, you can download the plugin directly, from the plugin’s archive page.


by guus at April 06, 2022 17:54

JSXC Openfire plugin 4.4.0-1 released!

Earlier today, version 4.4.0 release 1 of the Openfire JSXC plugin was released. This plugin allows you to easily deploy the third-party JSXC client in Openfire. In this release, the version of the client that is bundled in the plugin is updated to 4.4.0!

The updated plugin should become available for download in your Openfire admin console in the course of the next few hours. Alternatively, you can download the plugin directly, from the plugin’s archive page.

For other release announcements and news follow us on Twitter


by guus at April 06, 2022 17:39

Openfire Message of the Day (MotD) plugin version 1.2.3 released

Earlier today, version 1.2.3 of the Openfire Message of the Day plugin was released. This version adds a German translation to the admin console (thank you, Stephan Trzonnek, for providing the translation)!

The updated plugin should become available for download in your Openfire admin console in the course of the next few hours. Alternatively, you can download the plugin directly, from the plugin’s archive page.


by guus at April 06, 2022 13:25

REST API Openfire plugin 1.8.0 released!

Earlier today, version 1.8.0 of the Openfire REST API plugin was released. This version adds new endpoints for readiness, liveness, and cluster status!

The updated plugin should become available for download in your Openfire admin console in the course of the next few hours. Alternatively, you can download the plugin directly, from the plugin’s archive page.


by guus at April 06, 2022 13:19

April 05, 2022

The XMPP Standards Foundation

The XMPP Newsletter March 2022

Welcome to the XMPP Newsletter, great to have you here again! This issue covers the month of March 2022.

Like this newsletter, many projects and their efforts in the XMPP community are a result of people’s voluntary work. If you are happy with the services and software you may be using, especially throughout the current situation, please consider saying thanks or help these projects! Interested in supporting the Newsletter team? Read more at the bottom.

Newsletter translations

This is a community effort, and we would like to thank translators for their contributions. Volunteers are welcome! Translations of the XMPP Newsletter will be released here (with some delay):

XSF Announcements

XSF and Google Summer of Code 2022

  • The XSF has been accepted as hosting organization at Google Summer of Code 2022 (GSoC). If you are interested in participating as a student, mentor or as a project in general, please add your ideas and reach out to us. The contributor application period has begun already, so be quick!
  • XMPP Newsletter via mail: We migrated to our own mail-list server and stopped using Tinyletter. It’s read-only, and you will receive the XMPP Newsletter on a monthly basis. It also eliminates the privacy concerns with Tinyletter.

XSF fiscal hosting projects

The XSF offers fiscal hosting for XMPP projects. Please apply via Open Collective. For more information, see the announcement blog post. Current projects:



The Profanity devs posted a quick guide on how to use OpenPGP for XMPP (OX).

JMP’s Newsletter announces a new client for Android (based on Conversations) that has a focus on improving UX for users of standards-compliant gateways.

JMP Cheogram

The Mellium Dev Communiqué for March 2022 has been released! This release includes changes to the sidebar in the Communiqué TUI client and improvements to various packages in the main module.

Software news

Clients and applications

Gajim development news: March brings a new issue reporting system and many performance improvements for both Gajim and python-nbxmpp. Gajim’s OMEMO plugin comes with some improvements as well. Last but not least, there has been a security issue in python-nbxmpp, which has been fixed in version 2.0.6.

Openfire Pàdé 1.5.7 and 1.6.3 and Openfire Pàdé 1.6.2 have been released.

Profanity 0.12.0 has been released, with in-band account registration and user mood support, new theme, improved OX user experience (as the article above shows) and a slew of fixes and polished features.

Psi+ 1.5.1615 and Psi+ 1.5.1618 have been released.

Conversations 2.10.5 is out, bringing better call reconnections after network switches, showing caller JID and account JID in incoming call screen, adapting the file storage locations per the new Android 11 requirements and a security fix affecting file downloads. Note that the F-Droid version lags behind, due to unrelated issues, but is out and includes only the security fix. Also announced was that accounts on are free from now on.


Jackal 0.58.0 has been released and added the BoltDB repository type.

After three years of development Prosody 0.12.0 has been released. The update covers XMPP Compliance, mobile and connectivity optimizations, updated HTTP file sharing, improved audio/video calling support, Direct TLS and many more - congratulations!


Tigase XMPP Server 8.2.0 has been released! The biggest feature is support for the MIX protocol, which offers a better group chat experience, especially on mobile devices. Group chat (MUC) was not left behind and received a lot of fixes as well. In addition, we improved server-to-server connectivity, added an option to store certificates in the repository (really helpful in cluster deployments), and more!

The Ignite Realtime community is happy to announce the immediate availability of a maintenance release 2.2.3 of the GoJara plugin for Openfire. GoJara provides an implementation of XEP-0321 “Remote Roster Management” and helps out with monitoring Spectrum 2.


slixmpp version 1.8.1 has been released, fixing a compatibility issue with the python standard library due to the defusedxml introduction in the 1.8.0 release.

python-nbxmpp versions 2.0.5 and 2.0.6 have been released, fixing a security issue in resolving websocket URIs.

Smack 4.4.5 and 4.5.0-alpha1 has been released.

Extensions and specifications

Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).


The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

  • No XEPs proposed this month.


  • Version 0.1.0 of XEP-0462 (PubSub Type Filtering)

    • Accepted by vote of Council on 2022-02-09.
  • Version 0.1.0 of XEP-0463 (MUC Affiliations Versioning)

    • Accepted by vote of Council on 2022-02-16.


If an experimental XEP is not updated for more than twelve months, it will be moved off Experimental to Deferred. If there is another update, it will put the XEP back onto Experimental.

  • No XEPs deferred this month.


  • Version 1.6.0 of XEP-0115 (Entity Capabilities)

    • Mention preimage attacks explicitly (ssw)
  • Version 1.4.0 of XEP-0156 (Discovering Alternative XMPP Connection Methods)

    • Remove DNS _xmppconnect method due to security vulnerability. (tjb)

Last Call

Last Calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call helps improve the XEP before returning it to the Council for advancement to Stable.

  • No Last Call this month.

Stable (formerly known as Draft)

Info: The XSF has decided to rename ‘Draft’ to ‘Stable’. Read more about it here.

  • No XEPs advanced to Stable this month.


  • No XEP deprecated this month.


  • XEP-0008 (IQ-Based Avatars)

    • Obsoleted due to two superseding specifications (egp)
  • XEP-0038 (Icon Styles)

    • Obsolete due to the omnipresence of Unicode emoji, as well as Bits of Binary stickers. (egp)
  • XEP-0051 (Connection Transfer)

    • Obsolete because this feature has been merged into XMPP core, see RFC6120 section, which describes the stream error. (egp)
  • XEP-0138 (Stream Compression)

    • Obsolete due to security vulnerability. (tjb)
  • XEP-0229 (Stream Compression with LZW)

    • Obsolete due to security vulnerability. (tjb)

Call for Experience

A Call for Experience, like a Last Call, is an explicit call for comments, but in this case it is mostly directed at people who have implemented, and ideally deployed, the specification. The Council then votes to move it to Final.

  • No Call for Experience this month.

Spread the news!

Please share the news on other networks:

Here you can subscribe via email. It is read-only, and only the newsletter will be sent to you on a monthly basis.

Also check out our RSS Feed!

Looking for job offers or want to hire a professional consultant for your XMPP project? Visit our XMPP job board.

Help us to build the newsletter

This XMPP Newsletter is produced collaboratively by the XMPP community. Therefore, we would like to thank Adrien Bourmault (neox), anubis, Anoxinon e.V., Benoît Sibaud, cpm, daimonduff, emus, Ludovic Bocquet, Licaon_Kter, MattJ, nicfab, Sam Whited, singpolyma, TheCoffeMaker, wurstsalat, Ysabeau, Zash for their support and help in creation, review, translation and deployment. Many thanks to all contributors and their continuous support!

Each month’s newsletter issue is drafted in this simple pad. At the end of each month, the pad’s content is merged into the XSF Github repository. We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. You have a project and want to spread the news? Please consider sharing your news or events here, and promote it to a large audience.

Tasks we do on a regular basis:

  • gathering news in the XMPP universe
  • short summaries of news and events
  • summary of the monthly communication on extensions (XEPs)
  • review of the newsletter draft
  • preparation of media images
  • translations


This newsletter is published under CC BY-SA license.

April 05, 2022 00:00

April 04, 2022

Maxime Buquet

Interoperability in a “Big Tech” world

This is an answer to the EU Parliament's announcement that it will force some service providers to allow others to interact with them, which we call "interoperability".


In theory, interoperability is a way to allow different networks to communicate together. And it's great; it's even important for the emancipation and empowerment of users.

I still have concerns though, because I think in general it makes user experience (UX) more complex, and even screws up the various efforts applications make in this domain [1].

This law says it’s going to force big companies (Facebook, Apple, etc.) that it calls “gatekeepers” to open their services to other networks.

In practice these networks will be accessible via bridges. A bridge is the software layer that handles the connection between different networks. It understands the language (protocol) that these applications speak, and translates from one to the other.

These bridges already exist for various open and proprietary protocols, like WhatsApp's. A problem is that it is in WhatsApp's interest to ensure their users don't use any other applications than the ones they provide. As soon as WhatsApp realizes that a bridge works, they will quickly change something in their software to ensure it doesn't anymore, and may also ban accounts that were using the bridges, etc.

Power struggle

What implications are there from small networks' perspective? And since anything that isn't beneficial for networks would also impact users, what implications are there for the users?

How will these platforms now handle questions of identity at their doors? Usually they would ask for a phone number, an ID card, and whatnot.

Now that they don’t have control over the whole network, will users have to register credentials with the bridge to communicate? That requirement is often used as an excuse to protect themselves from spam, and it may indeed help, but it also has various harmful effects on users.

The law now saying that they have to allow interoperability doesn't mean they will magically adopt good practices [2]. They are still sharks and will still be in a position of strength over other players on the network.

With their large userbase, these platforms would be able to impose certain practices on all who want to communicate with them. It's already been the case when Google (gtalk), Facebook, and Microsoft were using XMPP, and it's possible to observe this behaviour in email as well, with Google (gmail) and Microsoft.

In summary: a pretence of debate during standardization (if it even happens), caused by this position of strength.

A power struggle already here?

Some say this power struggle already exists, and it’s true. To what extent do these companies influence our protocols and applications already? I wouldn’t know.

I would say many features and UX come from them. Because of their huge userbase, lots of us active in the XMPP community tell ourselves we need to at least be able to equal them to be as attractive, and that’s how it gets in the protocol.

By forcing these companies to open up – which will also be turned upside down as a marketing strategy to show their goodness by the way – won’t this influence grow even more in our spheres? To what extent?

In email, for example, if you've had the chance to host your own server, you certainly have had to cross swords with Google, which is very influential in this area and where a good chunk of your contacts are hosted.

Google easily abuses its position of strength to impose various anti-spam measures, and other practices which they pulled out of their magic hat (they might have asked their friends over at Microsoft and co). And if one day they wish to stop communicating with you, meaning you lose access to your contacts, you have no say in it.

To clarify the use of the word “force”: These regulations aren’t in the interest of the companies we’re talking about. Let me remind you, as I’ve said above, that as we speak they actively try to prevent any “unsanctioned” implementation from using their platforms.

That’s why it is generally complex and time-consuming to maintain a bridge. They will actively fight you, and you will need to update your code again and again. Another example would be NewPipe and YouTube on Android.

In summary

One can imagine the bare minimum will be done to comply with the law – after a horde of lobbyists has gone over it again and again, to weaken it even more.

Forcing interoperability is only a question of form, and not of substance. The problem still is capitalism, accumulation of wealth and power, and monopolies and oppressions that these create.

It’s certainly pessimistic, but I doubt that forcing these monopolies to communicate with other entities will allow the free XMPP community to oppose their ideas and provoke substantial changes within these services; rather, I picture the opposite.

Down with capitalism. Down with oppressions.

  1. TODO: expand on this in another article ↩︎

  2. The phrase “good practices” is to be defined obviously, by a collective discussion between equals, not out of a unilateral decision. ↩︎

by pep. ( at April 04, 2022 11:00

March 30, 2022


ejabberd 21.12

This new ejabberd 21.12 release comes after five months of work and contains more than one hundred changes, many of them major improvements or new features, and several bug fixes.

ejabberd 21.12 released

When upgrading from previous versions, please notice: there are changes in mod_register_web behaviour and in the PostgreSQL database schema; please check whether they affect your installation.

A more detailed explanation of those topics:

Optimized MucSub and Multicast processing

More efficient processing of MucSub and Multicast (XEP-0033) messages addressed to a large number of addresses.

Support MUC Hats

MUC Hats (XEP-0317) defines a more extensible model for roles and affiliations in Multi-User Chat rooms. This protocol was deferred, but it is supported by several clients and servers. ejabberd’s implementation supports both the XEP schema and the ConverseJS/Prosody custom schema.

New mod_conversejs

This module serves a simple page that allows the Converse.js XMPP web client to connect to ejabberd. It can use ejabberd’s WebSocket or BOSH (HTTP-Bind) endpoints.

By default this module points to the public online client available at converse.js. Alternatively, you can download the client and host it locally with a configuration like this:

hosts:
  - localhost

listen:
  -
    port: 5280
    ip: "::"
    module: ejabberd_http
    tls: false
    request_handlers:
      /websocket: ejabberd_http_ws
      /conversejs: mod_conversejs
      /conversejs_files: mod_http_fileserver

modules:
  mod_conversejs:
    websocket_url: "ws://localhost:5280/websocket"
    conversejs_script: "http://localhost:5280/conversejs_files/converse.min.js"
    conversejs_css: "http://localhost:5280/conversejs_files/converse.min.css"
  mod_http_fileserver:
    docroot: "/home/ejabberd/conversejs-9.0.0/package/dist"
    accesslog: "/var/log/ejabberd/fileserver-access.log"

Many PubSub improvements

Add delete_old_pubsub_items command.
Add a command for keeping only the specified number of items on each node and removing all older items. This might be especially useful if nodes may be configured to have no ‘max_items’ limit.

Add delete_expired_pubsub_items command
Support XEP-0060’s pubsub#item_expire feature by adding a command for deleting expired PubSub items.
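Both commands are invoked through ejabberdctl. As a sketch (the command names come from this release; the schedule, paths, and the 1000-item argument are illustrative assumptions), the cleanup could be automated with a crontab fragment like this:

```
# Nightly PubSub maintenance (illustrative schedule and paths)
# 04:00 - delete items whose pubsub#item_expire date has passed
0 4 * * * /sbin/ejabberdctl delete_expired_pubsub_items
# 04:05 - keep only the newest 1000 items on each node
5 4 * * * /sbin/ejabberdctl delete_old_pubsub_items 1000
```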

Fix get_max_items_node/1 specification
Make it explicit that the get_max_items_node/1 function returns ?MAXITEMS if the ‘max_items_node’ option isn’t specified. The function didn’t actually fall back to ‘undefined’ (but to the ‘max_items_node’ default; i.e., ?MAXITEMS) anyway. This change just clarifies the behavior and adjusts the function specification accordingly.

Improvements in the ejabberd Documentation web

Added many cross-links between modules, options, and specific sections.

Added a new API Tags page similar to “ejabberdctl help tags”.

Improved the API Reference page, so commands show the tags and the definer module.

Configuration changes

mod_register_web is now affected by the restrictions that you configure in mod_register (#3688).

mod_register gets a new option, allow_modules, to restrict what modules can register new accounts. This is useful if you want to allow only registration using mod_register_web, for example.
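As an illustrative sketch (based on this description, not a tested configuration), allowing account creation only through the web form might look like this in ejabberd.yml:

```yaml
modules:
  mod_register:
    ## Hypothetical example: only mod_register_web may create accounts
    allow_modules: [mod_register_web]
  mod_register_web: {}
```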

PostgreSQL changes

Added to PgSQL’s new schema the missing SQL migration for table push_session (#3656)

Fixed in PgSQL’s new schema the vcard_search definition (#3695).
How to update an existing database:

ALTER TABLE vcard_search DROP CONSTRAINT vcard_search_pkey;
ALTER TABLE vcard_search ADD PRIMARY KEY (server_host, lusername);

Summary of changes:


  • create_room_with_opts: Fixed when using SQL storage (#3700)

  • change_room_option: Add missing fields from config inside mod_muc_admin:change_options

  • piefxis: Fixed arguments of all commands


  • mod_caps: Don’t forget caps on XEP-0198 resumption

  • mod_conversejs: New module to serve a simple page for Converse.js

  • mod_http_upload_quota: Avoid ‘max_days’ race

  • mod_muc: Support MUC hats (XEP-0317, conversejs/prosody compatible)

  • mod_muc: Optimize MucSub processing

  • mod_muc: Fix exception in mucsub {un}subscription events multicast handler

  • mod_multicast: Improve and optimize multicast routing code

  • mod_offline: Allow storing non-composing x:events in offline

  • mod_ping: Send ping from server, not bare user JID (#3658)

  • mod_push: Fix handling of MUC/Sub messages (#3651)

  • mod_register: New allow_modules option to restrict registration modules

  • mod_register_web: Handle unknown host gracefully

  • mod_register_web: Use mod_register configured restrictions (#3688)


  • Add delete_expired_pubsub_items command

  • Add delete_old_pubsub_items command

  • Optimize publishing on large nodes (SQL)

  • Support unlimited number of items

  • Support ‘max_items=max’ node configuration (#3666)

  • Bump default value for ‘max_items’ limit from 10 to 1000 (#3652)

  • Use configured ‘max_items’ by default

  • node_flat: Avoid catch-all clauses for RSM

  • node_flat_sql: Avoid catch-all clauses for RSM


  • Use INSERT … ON CONFLICT in SQL_UPSERT for PostgreSQL >= 9.5

  • mod_mam export: assign MUC entries to the MUC service (#3680)

  • MySQL: Fix typo when creating index (#3654)

  • PgSQL: Add SASL auth support, PostgreSQL 14 (#3691)

  • PgSQL: Add missing SQL migration for table push_session (#3656)

  • PgSQL: Fix vcard_search definition in pgsql new schema (#3695)


  • “sort -R” command not POSIX, added “shuf” and “cat” as fallback (#3660)

  • Make s2s connection table cleanup more robust

  • Update export/import of scram password to XEP-0227 1.1 (#3676)

  • Update Jose to 1.11.1 (the last correctly versioned release)

ejabberd 21.12 download & feedback

As usual, the release is tagged in the Git source code repository on Github.

The source package and binary installers are available at ejabberd XMPP & MQTT server download page.

If you suspect that you’ve found a bug, please search for or file a bug report on GitHub.

The post ejabberd 21.12 first appeared on ProcessOne.

by Jérôme Sautret at March 30, 2022 09:33

March 29, 2022


Newsletter: Cheogram Android Release, Matrix Alpha

Hi everyone!

Welcome to the latest edition of your pseudo-monthly JMP update!

In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client.  Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

This month marks the first release of the Cheogram Android app we have been sponsoring.  This app is a fork of the excellent Conversations, and will stay close to upstream going forward.  Some of the improvements relevant to JMP users include:

  • Add contacts without typing
  • Integrate with the native Android Phone app (optional)
  • Address book integration
  • Messages with both media and text, including animated media
  • Unobtrusive display of subject lines, where present (such as on voicemails)
  • Links to known contacts are shown with their name (improves group text UX)
  • Show timestamps for calls
  • Missed call notifications

All of these features have been built in a standards-compliant way and do not rely on anything particular to Cheogram or JMP at all, so they could be reused with other gateways as well.  You can also get the app from F-Droid.

In other news, we’ve heard for some time that some users want to try using JMP from Matrix.  Since we are so big on bidirectional gateways, we have decided to add support for signing up with JMP using a Matrix ID.  This should be considered an alpha test at this time, and most notably voice does not work with the gateway yet so you will need to use SIP (or forwarding) for voice if you use a Matrix ID.  SMS, MMS, and Voicemail will all be delivered to Matrix just as they are to Jabber.  For this we are using the excellent gateway instance at  To get started, just head to, choose a phone number, and select “I am a Matrix user” on the next page.

To learn what’s happening with JMP between newsletters, here are some ways you can find out:

Thanks for reading and have a wonderful rest of your week!

Art in screenshots is from Pepper & Carrot by David Revoy, CC-BY. Artwork has been modified to crop out sections for avatars and photos, and in some cases add transparency. Use of this artwork does not imply endorsement of this project by the artist.

by Stephen Paul Weber at March 29, 2022 03:00

March 24, 2022


Profanity and OpenPGP for XMPP (OX)

We have been working to implement OX in Profanity. OX is XEP-0374: OpenPGP for XMPP Instant Messaging, which may replace XEP-0027: Current Jabber OpenPGP Usage.

It has been part of Profanity since version 0.10 and has received some fixes since then.

Feel free to try and test the implementation. Let us know if you run into issues, and support the development by testing and reporting bugs.

How does it work? Some parts are done directly with GnuPG; you will see gpg commands that need to be executed in the shell. The commands within Profanity are the /ox commands.

Generate OpenPGP key materials

The first step is to create an OpenPGP key pair. The key pair generation will be done with the gpg command of GnuPG.

gpg --quick-generate-key xmpp:alice@domain.tld future-default default 3y

This command will generate an OpenPGP key with the UID xmpp:alice@domain.tld. The option future-default is used to generate an ed25519/cv25519 key. The key will expire in three years. Replace the Jabber ID with your own JID, and do not forget the xmpp: URI prefix.

pub   ed25519 2021-09-21 [SC] [expires: 2024-09-20]
uid                      xmpp:alice@domain.tld
sub   cv25519 2021-09-21 [E]

Export your public key

You need to export your public key in order to share it with your buddy. Use the command below to export your public key:

gpg --export \
  --export-options export-minimal \
  --export-filter 'keep-uid=uid =~ xmpp:alice@domain.tld' \
  --export-filter 'drop-subkey=usage =~ a' \
  583BAE703A801095B6B71A56BD801174B1A0B84A \
  > /tmp/pep-key.gpg

The key will be exported to /tmp/pep-key.gpg. You may check the key with the command below:

gpg --show-key --with-sig-list /tmp/pep-key.gpg

Keep in mind: public keys may contain additional information (signatures, names, e-mail addresses). Be careful about which data gets exported. GnuPG’s export-options and export-filter options will help you filter the data.

Publish your key

You can use Profanity to publish your exported key to your account (PEP). The /ox announce command will publish your key.

/ox announce /tmp/pep-key.gpg
Annonuce OpenPGP Key for OX /tmp/pep-key.gpg ... 

The command will create two PEP node records to store the key.

Discover keys

The /ox discover command will be used to discover keys.

/ox discover buddy@domain.tld
Discovering Public Key for buddy@domain.tld 

To request and import a key, you can use the /ox request command.

/ox request buddy@domain.tld 1234567890ABCDEF1234567890ABCDEF12345678
Requesting Public Key 1234567890ABCDEF1234567890ABCDEF12345678 for buddy@domain.tld
Public Key imported 

The key will be imported into your gnupg keyring.

Sign the imported key

The key can be shown via gpg -k xmpp:buddy@domain.tld. Make sure the key really belongs to your buddy, then sign it with your own key.

gpg --ask-cert-level --default-key 583BAE703A801095B6B71A56BD801174B1A0B84A --sign-key 1234567890ABCDEF1234567890ABCDEF12345678

The command /ox contacts will show the keys with XMPP-UID. The command /ox keys will show all known OpenPGP keys.

Use OX

Within a chat window you can start OX via /ox start and stop it via /ox end.

Messages will be sent signed and encrypted.

March 24, 2022 13:07