Planet Jabber

October 20, 2021


Newsletter: Action required for SIP accounts, new inbound call features, and more!

Hi everyone!

Welcome to the latest edition of your pseudo-monthly update!

In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client. Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

The biggest announcement this month is the launch of our new inbound voice and SIP account system! Due to changes at our major carrier partner, all inbound call handling had to be rewritten and the SIP accounts some people use are moving to a new server with a new server name. As part of this rewrite you can now use the configure calls command to set call forwarding to any XMPP, SIP, or tel URI without involving support. If you haven’t used the JMP bot before, you do so by sending a message to your contact with the text of the command you want to run. You can send help for a list.

If you have not tried it yet, now would be a great time to try our features allowing calling from your Jabber account. All your regular SMS contacts can be called as well with no changes on your part, from any client that supports voice calls. Inbound calls can be routed to your Jabber ID using the configure calls command.

If you still need a SIP account for some reason (such as to use with a device that does not support Jabber calls) you will need to use the reset sip account command to get a username and password on the new server, as the old server will be going away soon. Be sure to use UDP as the transport!

In other news, our founder Denver Gingerich (ossguy) has returned from his leave and is rejoining us in day-to-day operations. You will see him more often in the chatroom and sometimes answering support.

There has also been a bit of movement on the mobile app front. We have been partially sponsoring development work on the now-released Snikket iOS which is now our recommended client for all iOS users. When paired with a Snikket server this client should receive calls and messages reliably, and also supports DTMF (entering digits for phone menus) during calls.

We’ve also had a volunteer working with us to clean up some of the features in our prototype app for Android. Not many visible changes yet (except for a much better icon to open the DTMF pad) but watch this space for updates.

As always, if you have any questions, feel free to reply to this email or find us in the group chat per below. We’re happy to chat whenever we’re available!

To learn what’s happening with JMP between emails like this, here are some ways you can find out:

Thanks for reading and have a wonderful rest of your week!

by Stephen Paul Weber at October 20, 2021 02:00

October 17, 2021

Peter Saint-Andre

There's No Such Thing as a Kudo

It always warms my heart when we import a word directly from ancient Greek into English. Often they are philosophical locutions, such as eudaimonia and ataraxia. Yet at times more mundane terms make the leap; these days perhaps the most common one is kudos (e.g., "kudos to you on acing that algebra test!"). Consistent with modern English usage, people tend to pronounce it "koo-doze" and think of it as a plural ("that algebra test was really hard so you deserve many kudos for acing it"). However, in ancient Greek κῦδος was pronounced "koo-doss" and was a singular noun (meaning fame, honor, renown). Just as we give praise (not "a praise") to a friend or colleague, so an ancient Greek might have given κῦδος. I'm sorry to disappoint you, but as a result there is no such thing as a kudo....

October 17, 2021 00:00

October 10, 2021

Paul Schaub

A Simple OpenPGP API

In this post I want to share how easy it is to use OpenPGP using the Stateless OpenPGP Protocol (SOP).

I talked about the SOP specification and its purpose and benefits already in past blog posts. This time I want to give some in-depth examples of how the API can be used in your application.

There are SOP API implementations available in different languages, like Java and Rust. What they have in common is that they are based on the Stateless OpenPGP Command Line Specification, so they are very similar in form and function.

For Java-based systems, the SOP API was defined in the sop-java library. This module merely contains interface definitions. It is up to the user to choose a library that provides an implementation for those interfaces. Currently the only known implementation is pgpainless-sop based on PGPainless.
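For readers who want to follow along, pgpainless-sop is published to Maven Central; a build snippet might look like the following (the exact version is omitted here, check for the current release):

```xml
<!-- Pulls in pgpainless-sop, which implements the sop-java interfaces. -->
<dependency>
    <groupId>org.pgpainless</groupId>
    <artifactId>pgpainless-sop</artifactId>
    <!-- add a <version> element with the current release -->
</dependency>
```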

The single entry point to the SOP API is the SOP interface (obviously). It provides methods for OpenPGP actions. All we need to get started is an instantiation of this interface:

// This is an ideal candidate for a dependency injection framework!
SOP sop = new SOPImpl(); // provided by pgpainless-sop

Let’s start by generating a secret key for the user Alice:

byte[] key = sop.generateKey()
        .userId("Alice <>")
        .generate()
        .getBytes();

The resulting byte array now contains our OpenPGP secret key. Next, let's extract the public key certificate, so that we can share it with our contacts.

// public key
byte[] cert = sop.extractCert()
        .key(key) // secret key
        .getBytes();

There we go! Both byte arrays contain the key material in ASCII armored form (which we could disable by calling .noArmor()), so we can simply share the certificate with our contacts.
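For illustration, an ASCII-armored certificate is just text wrapped in the familiar armor headers defined by RFC 4880 (the key material below is abbreviated), which is why it can be pasted straight into an email or chat message:

```
-----BEGIN PGP PUBLIC KEY BLOCK-----

mQGNBGFf...base64-encoded key material...
=abCD
-----END PGP PUBLIC KEY BLOCK-----
```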

Let’s actually create an encrypted, signed message. We obviously need our secret key from above, as well as the certificate of our contact Bob.

// get bobs certificate
byte[] bobsCert = ...

byte[] message = "Hello, World!\n".getBytes(StandardCharsets.UTF_8);

byte[] encryptedAndSigned = sop.encrypt()
        .signWith(key) // sign with our key
        .withCert(cert) // encrypt for us, so that we too can decrypt
        .withCert(bobsCert) // encrypt for Bob
        .plaintext(message)
        .getBytes();

Again, by default this message is ASCII armored, so we can simply share it as a String with Bob.

We can decrypt and verify Bob's reply like this:

// Bob's answer
byte[] bobsEncryptedSignedReply = ...

ByteArrayAndResult<DecryptionResult> decrypted = sop.decrypt()
        .verifyWithCert(bobsCert) // verify Bob's signature
        .withKey(key) // decrypt with our key
        .ciphertext(bobsEncryptedSignedReply)
        .toByteArrayAndResult();

// Bob's plaintext reply
byte[] message = decrypted.getBytes();
// List of signature verifications
List<Verification> verifications = decrypted.getResult().getVerifications();

Easy! Signing messages and verifying signed-only messages basically works the same, so I’ll omit examples for it in this post.

As you can see, performing basic OpenPGP operations using the Stateless OpenPGP Protocol is as simple as it gets. And the best part is that the API is defined as an interface, so swapping the backend can simply be done by replacing the SOP object with an implementation from another library. All API usages stay the same.

I want to use this opportunity to encourage YOU the reader: If there is no SOP API available for your language of choice, consider creating one! Take a look at the specification and an API definition like sop-java or the sop Rust crate to get an idea of how to design the API. If you keep your SOP API independent from any backend library it will be easy to swap backends out for another library later in the process.

Let’s make OpenPGP great again! Happy Hacking!

by vanitasvitae at October 10, 2021 15:45


Gajim 1.3.3

This release features improved Ad-Hoc Commands and brings back spell checking. Gajim 1.3.3 includes many bug fixes and improvements. Thanks everyone for reporting issues!

What’s New

The Ad-Hoc Commands window has been ported to Gajim’s new Assistant. This unifies the look and feel with other actions using an Assistant and it also fixes some issues.

Windows users please note: Windows builds are now based on Python 3.9, which does not run on Windows 7 or older.

More Changes


  • Profile: A NOTE entry has been added
  • API JID for integration has been updated
  • Provider list has been removed (service is gone)
  • #10441 Reload CSS after switching dark/light theme
  • #10477 Migration routine for portable installer
  • #10540 Windows: Added GSSAPI dependency
  • Fixed starting History Manager in standalone mode

Have a look at the changelog for the complete list.

Known Issues

  • Zeroconf (serverless messaging) has not been re-implemented yet
  • Client certificate setup is not possible yet


As always, don’t hesitate to contact us or open an issue on our Gitlab.

October 10, 2021 00:00

October 05, 2021

The XMPP Standards Foundation

The XMPP Newsletter September 2021

Welcome to the XMPP Newsletter covering the month of September 2021.

Many projects and their efforts in the XMPP community are a result of people’s voluntary work. If you are happy with the services and software you may be using, especially throughout the current situation, please consider saying thanks or helping these projects!

Read this Newsletter via our RSS Feed!

Interested in supporting the Newsletter team? Read more at the bottom.

Other than that - enjoy reading!

Newsletter translations

Translations of the XMPP Newsletter will be released here (with some delay):

Many thanks to the translators and their work! This is a great help in spreading the news! Please join them in their work or start a translation in another language!

XSF Announcements

The XSF offers fiscal hosting for XMPP projects now! Please apply via Open Collective. For more information, see the announcement blog post.

The XSF is planning to participate in the Google Summer of Code 2022 (GSoC). If you are interested in participating as a student, as a mentor, or with a project in general, please add your ideas and reach out to us!

Furthermore, the website received an update. It’s now built using Hugo (instead of Pelican) which reduces maintenance effort significantly. The new website is based on Bootstrap 5, and has been developed with simplicity in mind. We also made sure to make contributions as easy as possible. Building the website locally requires a minimum of dependencies, and is possible via Docker and Vagrant as well.


XMPP Office Hours - Also, check out our new YouTube channel!

Berlin XMPP Meetup (remote): Monthly Meeting of XMPP Enthusiasts in Berlin - always 2nd Wednesday of the month.


OpenPGP for XMPP (OX) is slowly getting client implementations. In a German blog post, DebXWoody walks us step by step through the process of enabling OX and using it in Profanity.

The Libervia ActivityPub Gateway work continues, with a report about Full-Text Search for PubSub cache and an early, but functional, ActivityPub XMPP Component.


Matthew Wild has published a web utility for exploring XEP-0392 “Consistent Color Generation”. This XEP advises clients on how to colourize a user’s contacts (e.g. their nicknames or default avatars) for easier visual identification. The XEP describes a standard algorithm that aims to provide a distinctive colour for any contact, with considerations for colour vision deficiencies, and allowing all of a user’s clients to display the same colour for a given contact. Check out the XEP-0392 colour explorer and the Modern XMPP colour guidance.
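The core of that algorithm is small enough to sketch here. Roughly, per XEP-0392: hash the identifier with SHA-1, read the first two bytes as a little-endian unsigned integer, and scale the result to a hue angle. The class and method names below are my own, not from any particular client:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ColorUtil {
    /**
     * Map an identifier (nickname or bare JID) to a hue angle in degrees,
     * following the XEP-0392 approach: SHA-1 the input, read the first two
     * bytes as a little-endian unsigned integer, and scale to [0, 360).
     * Every client applying the same rule renders the same colour.
     */
    public static double hueAngle(String identifier) {
        try {
            MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
            byte[] digest = sha1.digest(identifier.getBytes(StandardCharsets.UTF_8));
            int value = (digest[0] & 0xFF) | ((digest[1] & 0xFF) << 8);
            return value / 65536.0 * 360.0;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-1 not available", e);
        }
    }
}
```

The hue then feeds into a perceptually balanced colour space (and the XEP describes corrections for colour vision deficiencies), but the angle derivation above is the deterministic heart of it.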

Ever wanted a comparison of XMPP and Matrix web clients? You’re in luck, as Ade Malsasa Akbar has written a simple overview of two group chat messengers from the decentralization family: Element from Matrix and Movim from XMPP. This is a discussion of usability from an end-user perspective, without talking about technology stuff like security or protocols.

Software news

Clients and applications

Dino v0.2.2 has been released. This version is a maintenance release and includes bug fixes.

UWPX v. and v. have been released. v. finally added push support with the push server developed by COM8. v. of UWPX fixes a bunch of bugs and updates the UI to WinUI 2.7. Besides that, a new OMEMO status indicator was introduced, which should help you check whether your contacts support the latest OMEMO standard.

XMPP-DNS, a tool to look up XMPP SRV records and test connectivity, had its initial release of v0.1.0. The release was directly followed by v0.2.0, bringing support for the XMPP-server SRV records and a small bugfix release v0.2.1.

Gajim development news: September brought many updates under the hood. With big changes coming up in Gajim 1.4, many parts of the code have to be touched. These changes remain mostly invisible for users, but make Gajim more robust. In some cases, this results in visible improvements as well: Both Add Contact and Start Chat windows are now detecting the type of chat behind an address.

Go-sendxmpp, one of various alternatives to the original sendxmpp, released versions v0.1.0 and v0.1.1.

Conversations and Quicksy got version 2.10.0 out this month, with a short changelog: black bars on video calls (so you know when “you’re holding it wrong”), search performance improvements and a new setting to block app screenshots. Under the hood there was more: two bugs fixed for file attachments (specially for users with a lot of media files), touching the titlebar will open chat details and nested quotes (not yet the default, but you can “copy” and then “paste as quote” to use them).

Converse is moving forward after a lot of development. Version 8 of this JavaScript XMPP chat client that runs in your browser has been released. JC Brand’s blog post covers the visible changes (message styling, OMEMO-encrypted files, URL previews) as well as the internal changes (IndexedDB by default, web components). 8.0.1 followed shortly with bug fixes to the polished product.


Profanity 0.11.1 has been released improving upon themes, notifications and OMEMO handling.

The Mellium Dev Communiqué for September has been published. It includes minor updates to the Communiqué TUI client as well as the library. Full details in Dev Communiqué for September 2021 on their Open Collective page.


No news on XMPP servers has reached us this month. :-(


Mellium has released v0.20.0 of their Go XMPP library. The release announcement can be found on Open Collective. Some of the bigger features include group chat (MUC), chat history (MAM), and ad-hoc command support!

Extensions and specifications

Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).


The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

  • No XEPs proposed this month.


  • No new XEPs this month.


If an experimental XEP is not updated for more than twelve months, it will be moved off Experimental to Deferred. If there is another update, it will put the XEP back onto Experimental.

  • No XEPs deferred this month.


  • Version 0.8.0 of XEP-0384 (OMEMO Encryption)

    • Update to XEP-0420 version 0.4.0 and adjust namespace
    • Replace SCE’s old ‘content’ element by its new ‘envelope’ element
    • Replace SCE’s old ‘payload’ element by its new ‘content’ element
    • Update SCE’s namespace to ‘urn:xmpp:sce:1’
    • Update namespace to ‘urn:xmpp:omemo:2’ (melvo)
  • Version 0.14.0 of XEP-0280 (Message Carbons)

    • Incorporate LC feedback: Remove requirement to remove “private” elements (and add interop note), completely reword mobile considerations to fit modern reality. (gl)
  • Version 1.1 of XEP-0227 (Portable Import/Export Format for XMPP-IM Servers)

    • Discourage use of ‘password’, provide a way to include SCRAM credentials, PEP nodes and message archives. (mw)
  • Version 1.22.0 of XEP-0060 (Publish-Subscribe)

    • Remove exception for last item when purging a node: all items must be removed. (jp)

Last Call

Last Calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call helps improve the XEP before returning it to the Council for advancement to Draft.

Stable (formerly known as Draft)

Info: The XSF has decided to rename ‘Draft’ to ‘Stable’. Read more about it here.

  • No Stable this month.

Call for Experience

A Call For Experience, like a Last Call, is an explicit call for comments, but in this case it is mostly directed at people who have implemented, and ideally deployed, the specification. The Council then votes to move it to Final.

  • No Call for Experience this month.

Thanks all!

This XMPP Newsletter is produced collaboratively by the XMPP community.

Therefore many thanks to Adrien Bourmault (neox), Benoît Sibaud, emus, palm123, Licaon_Kter, MattJ, mdosch, nicola, seveso, Sam Whited, SouL, wurstsalat3000, Ysabeau for their support and help in creation, review and translation!

Spread the news!

Please share the news via other networks:

Find and place job offers in the XMPP job board.

Also check out our RSS Feed!

Help us to build the newsletter

We started drafting in this simple pad in parallel to our efforts in the XSF Github repository. We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. We really need more support!

Do you have a project and write about it? Please consider sharing your news or events here, and promote it to a large audience! Even if you can only spend a few minutes, your support would already be helpful!

Tasks which need to be done on a regular basis are for example:

  • Aggregation of news in the XMPP universe
  • Short formulation of news and events
  • Summary of the monthly communication on extensions (XEP)
  • Review of the newsletter draft
  • Preparation for media images
  • Translations: especially German and Spanish


This newsletter is published under CC BY-SA license.

October 05, 2021 00:00

September 29, 2021


Development News September 2021

September brought many updates under the hood. With big changes coming up in Gajim 1.4, many parts of the code have to be touched. These changes remain mostly invisible for users, but make Gajim more robust. In some cases, this results in visible improvements as well: Both Add Contact and Start Chat windows are now detecting the type of chat behind an address.

Changes in Gajim

Since development on Gajim 1.4 started, a lot has changed under the hood. Window management and contacts interface both received a complete makeover. These are essential components, which means almost every part of Gajim has to be adapted. This is also an opportunity to clean up old code and to revise features.

Jingle File Transfer for example received a new resource selector widget, which allows users to select a resource/device to send the file to. But this is just one of many features which needed to be updated.

Surprisingly often, there have been issue reports about joining group chats. It turns out these are a result of Gajim’s Start Chat window offering two actions for new addresses: either start a chat or join a group chat. Choosing the first action for group chats results in a mess. In order to fix this, Gajim will now try some discovery magic before actually starting a chat. The same goes for the new Add Contact window, which will now detect group chats and gateways. If a gateway (e.g. IRC) is detected, Gajim will offer registration options or Ad-Hoc Commands to configure the gateway, depending on its capabilities.

Plugin updates

No plugin updates this month.

Changes in python-nbxmpp

Parsing XEP-0050 Ad-Hoc Commands is now more robust against unknown or duplicated actions.

Furthermore, an issue with message corrections has been fixed.

As always, feel free to join and discuss with us.


September 29, 2021 00:00

September 28, 2021

Jérôme Poisson

Libervia progress note 2021-W38


It's time for a new progress note. The work is currently focused on the ActivityPub Gateway, and progress has been made on pubsub cache search and the base component.

Pubsub Cache Full-Text Search

Next to the pubsub cache implementation, it was necessary to have a good way to search among items.

So far, Libervia has done pubsub search using the pubsub service's capabilities, notably the XEP-0431 (Full Text Search in MAM) implementation. This works well (it's what is currently used on this very blog when you use the search box), but it has some pitfalls: the pubsub service must implement this XEP (and as far as I know, Libervia Pubsub is the only one which does at the moment), the search can only be done on a single node at a time, each search request implies a new XMPP request to the pubsub service, and pubsub items must be in plain text (which is currently always the case, but pubsub end-to-end encryption is planned as the second part of the granted NLNet project on which I'm working).

Given that, a local search capability is necessary. SQLAlchemy doesn't really support Full-Text Search (or FTS) for SQLite out of the box, but it allows running raw SQL directly, so I could use the really nice FTS engine available within SQLite (FTS5). This is an extension, but in practice it is already installed most of the time (it is part of the SQLite amalgamation).
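As a rough illustration of the approach (the table and column names here are hypothetical, not Libervia's actual schema), an FTS5 index over cached items can be created and queried with plain SQL:

```sql
-- Hypothetical schema: an external-content FTS5 index over a cache table.
CREATE VIRTUAL TABLE items_fts USING fts5(
    data,                      -- the searchable item payload
    content='pubsub_items',    -- read row content from the cache table
    content_rowid='id'
);

-- Full-text query; bm25() ranks the matches by relevance.
SELECT rowid
FROM items_fts
WHERE items_fts MATCH 'activitypub AND gateway'
ORDER BY bm25(items_fts);
```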

Thanks to the JSON support in SQLite, it is also possible to filter search requests on parsed data. That's really useful for features like blogs where you often want to do that (e.g. filtering on tags).

The cache search can be operated on all data in cache, that means that you can do search on items coming from multiple nodes and even multiple services. That opens the door to features like hashtags or blog suggestions.

Last but not least, search requests can be ordered by any parsed field. In other words, it will be possible to order a blog by declared publication date (which may be important if you want to import a blog), or events by location.

To have an idea of the possibilities, you can check the documentation of the CLI search command.

Base ActivityPub Component

Once the preparatory steps were done, the ActivityPub component itself could be started. In short, for people not used to XMPP, a "component" is a kind of generic plugin to the server. You declare it in your server configuration, choose a JID and a "shared secret" (a password), run it with those parameters, and voilà.

For the AP gateway, Libervia runs the component. There is documentation to explain how to launch it, don't worry it's simple.

As I've got questions about this, here is a small schema giving an overview on how the whole thing is working:

global overview of Libervia ActivityPub Gateway

I hope that it makes the whole thing more clear, otherwise don't hesitate to ask me for clarification.

As you can see, the gateway includes an HTTP server to communicate with AP software, but in many cases there will already be an HTTP server (website, XMPP web client, etc.). In this case, you'll have to redirect /.well-known/webfinger and /_ap requests to the gateway server.
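For example, with nginx in front, that redirection could look something like the following (the gateway's HTTP port is a placeholder here; use whatever port your gateway actually listens on):

```nginx
# Forward WebFinger and ActivityPub requests to the Libervia AP gateway.
# 127.0.0.1:8123 is a hypothetical address; adjust to your setup.
location /.well-known/webfinger {
    proxy_pass http://127.0.0.1:8123;
}

location /_ap {
    proxy_pass http://127.0.0.1:8123;
}
```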

For development, I'm using Prosody as the reference XMPP server implementation and Mastodon as the reference ActivityPub server implementation. I've set up a local Mastodon installation, and I've chosen to use Docker for that, as it makes it easy to have a reproducible environment and to save and restore a specific state. Finding the right configuration to use was not as trivial as I expected (I found outdated tutorials), but I managed to get the thing running relatively easily.

Because we work with HTTPS, I've made a custom docker image with a local certificate authority, so Mastodon could validate my gateway's HTTP server certificate. I'm already doing that for the docker images used for end-to-end tests of Libervia, so nothing difficult. Surprisingly though, Mastodon could not resolve my instance, while HTTPie running from the same container could do it flawlessly. I quickly realised that Mastodon was not respecting hosts declared in /etc/hosts (and added via extra_hosts in the Compose file), and found a relevant bug report on the Mastodon tracker. That was annoying, and I had to find a way to work around it. I did so by running a local DNS server, and Twisted offers a nice built-in one. Twisted DNS can easily use /etc/hosts to direct my local domains to my local IP; it's just a one-liner such as twistd3 -n dns --hosts-file=/etc/hosts -r.

After that the domain was resolving, but to my surprise, Mastodon was still not able to communicate with my gateway, and even more bizarre my server was receiving no request at all. After a quick round of tcpdump/wireshark, I saw that indeed nothing was sent to my server.

Thanks to the Libre nature of Mastodon, I could resolve this by reading the source code: the Mastodon::HostValidationError exception led me to a section that made the whole picture clear. My server is on a local IP, and Mastodon by default refuses to reach it (to avoid the confused deputy attack). With the ALLOWED_PRIVATE_ADDRESSES setting I could finally make Mastodon communicate with my server.

The How to implement a basic ActivityPub server tutorial made by Eugen Rochko (Mastodon original developer) is a nice article to start an ActivityPub implementation, it has been useful to build the base component (despite being a bit outdated, notably regarding signature).

I have to rant a bit, though: the ActivityPub specification is not available in EPUB or PDF, making it difficult to read on an e-book reader. I could overcome that thanks to pandoc (git clone, then pandoc index.html --pdf-engine=xelatex -o activitypub.pdf); it's really more comfortable to keep the reference like this.

So the base component is now available, but it is only usable by developers (and only capable of sending messages to ActivityPub for now). Things will get really exciting with the next 2 steps, as bidirectional communication will become available and the gateway will be usable by early adopters. I don't expect those steps to take really long.

test message sent with Libervia AP Gateway

Oh, and to answer another question that I've had, yes you can use the same ActivityPub actor identifier as your XMPP JID. I'll explain next time how everything is accessed.

That's all for today.

by goffi at September 28, 2021 06:57

September 27, 2021

Erlang Solutions

5 Erlang and Elixir Use Cases In FinTech 2/2

We talked in our recent blog post about some of the success stories of FinTechs and banks leveraging Erlang, Elixir and the BEAM virtual machine – including Vocalink, Goldman Sachs and others. In this post, let’s examine a further 5 interesting use cases, spanning building a bank from scratch in Elixir to using the most deployed open source message broker in the world (built in Erlang) at one of the world’s largest financial data companies.


Why Erlang? 

To power the core system

Klarna is the uber-successful European FinTech unicorn that is going from strength to strength in the BNPL space. They operate as an intermediary between customer and agent making it simpler for both to buy – settlement takes place later using any of various payment methods.  Klarna’s main payment system has been running for over 10 years, serving millions of customers.

Originally a monolith entirely built in Erlang, Klarna has since moved out to some different services, with a technology stack of Erlang, Scala, Clojure and Haskell, combined with a serverless architecture.

Where Erlang has been key in this success story is that it has enabled the core Klarna system to demonstrate extremely high availability over the years with zero downtime. Erlang has given Klarna the flexibility to grow massively and restructure the system without having to stop operating.


Why Erlang/Elixir?  

For innovative backend services

Kivra exists to develop sustainable and convenient solutions for everyday life. They do this by ensuring the secure, reliable delivery of digital financial information that previously would have been sent in paper form by traditional mail.

They are a fast-growing and ambitious FinTech scaleup operating in Sweden and Finland enabling over 37,000 companies, public authorities and organisations to service 5 million users (including half of the adult Swedish population) with over 200 million important digital documents every year. 

A key part of Kivra’s product offering is their Sender Platform, where Kivra’s B2B clients send important content and communications to end-users. Erlang Solutions have worked in close collaboration with Kivra in a fully remote team consisting of backend developers, frontend developers and UX specialists. We provided domain expertise in Erlang/Elixir technologies as well as contributing with advanced modern design and development practices such as design thinking and pair/team programming to deliver the solutions quickly in an iterative and incremental way. 


Why RabbitMQ?

To help scale financial applications.

Bloomberg is one of the largest private networks in the world. They provide current financial data (around 120 billion pieces at a peak of more than 10 million messages per second) to leaders and decision-makers worldwide at very low latency. 

They make extensive use of middleware, including queues, and use RabbitMQ (built in Erlang) across hundreds of teams at Bloomberg. Their model of messaging middleware as a service frees up application developers’ time for other tasks and achieves scalability, flexibility, and maintainability, without needing to focus on RabbitMQ Server details.

OTP Bank

Why Erlang?

To build modern, digital banking infrastructure

OTP Bank has 13 million customers in central & eastern Europe and has worked closely with us on ambitious projects to disrupt and modernise their model to great success. They were refactoring their complete legacy IT backend system to secure a future-proof full banking infrastructure for secure, reliable, scalable real-time transactions – Erlang was identified as the right tool for the job.

Erlang Solutions were engaged to help speed up the delivery of innovative services to their customers by implementing Luerl, a technology created by Erlang Solutions engineer and co-creator of Erlang, Robert Virding, to shorten the customer feedback cycle from weeks to hours. The Erlang-based immediate payment system was already using elements of the new design and has achieved zero downtime.

Memo Bank

Why Elixir?

To build a scalable reliable banking core system

Memo Bank is the first independent bank to be created in France in the last fifty years. It was founded in 2017 and serves the European small and medium businesses (SMB) market, helping them to manage cash flows and fund their growth as a bank. They provide all the services you’d expect from a business bank, from current accounts to credit lines. 

The Memo team chose Elixir as the right tool for the job, as building a system that is available anytime, from any device, was mission-critical for success. They have also identified the scalability and availability of Elixir systems as necessary to absorb real-time transactions reliably. Read more about Memo Bank’s story on our blog.


So there you go. These success stories, along with those mentioned in part 1 of this blog series, provide a compelling argument for leveraging the BEAM VM, Erlang and Elixir for all manner of development projects and use cases within FinTech, and there are many more too which we cannot detail due to NDAs.

If you have development needs in the industry, we are available to offer expert consultancy to help you get your FinTech products to market faster while using fewer resources. You can contact us at any time, across time zones, to speak with one of our expert consultants.

The post 5 Erlang and Elixir Use Cases In FinTech 2/2 appeared first on Erlang Solutions.

by Michael Jaiyeola at September 27, 2021 08:32

September 26, 2021

The XMPP Standards Foundation

The XSF as a Fiscal Host

Managing funds is easy when you’re a large project owned by an incorporated entity with accountants at your disposal, or when you’re a small project run by one person who accepts and uses all donations. When you’re in between, however, it can be difficult to handle. If you’re a project with a few regular contributors but no bank account, who handles the money? For many projects the answer is a fiscal host.

Fiscal hosts accept donations on behalf of a smaller organization and then earmark the donations for that smaller group's use. This way the larger organization handles taxes and accounting, and the smaller group doesn’t have to incorporate or pay lots of money for accountants.

Many fiscal hosts exist for software projects, some focused on particular areas of the software world that they want to see developed. Many general software-focused hosts, such as the Software Freedom Conservancy and the Open Source Collective, have requirements that put them out of the reach of most small XMPP-related projects. Until now, no fiscal host has existed specifically to nurture and grow new XMPP-related projects.

Today the XSF is announcing that it will change this by acting as a fiscal host for XMPP related projects. A new organization has been created on Open Collective and will be using their platform to accept donations on behalf of hosted projects. Funds are currently handled in USD (since the XSF is based in the U.S.) but projects from all over the world are welcome to apply! We can’t wait to see what small XMPP projects are able to do once they are given the tools they need to raise money under the umbrella of a 501(c)(3) non-profit organization like the XSF!

Screenshot of the XSF Open Collective page showing an Actions menu with “Apply” visible at the bottom.

The new terms of fiscal sponsorship can be found on the website, and you can apply for sponsorship by creating a collective for your project on Open Collective, then navigating to the XMPP organization and clicking “Apply”. This will present you with a form where you can enter the required information:

Screenshot of the application form that allows you to select an account to apply from as well as enter information about the project applying.

For more information about fiscal hosts, see the Open Collective Fiscal Hosts FAQ.

If you run an XMPP related open source project or organization and think you could benefit from a fiscal host to help you manage and distribute funds, consider applying!

September 26, 2021 00:00

September 24, 2021

Peter Saint-Andre

Opinions Weak and Strong

Continuing a thread that I started to explore earlier this year, I'd like to take a closer look at the intensity of opinions. Here as almost everywhere, there is a continuum: we all have opinions we hold strongly and opinions we hold weakly. Not only do the specific contents of these buckets change over time, but in general the intensity of one's opinions can change over time, too. We're all familiar with the sophomoric young adult who has strong opinions about everything (yes, I resembled that remark). Such an individual can be contrasted with the more mature person, who understands what truly matters in life and doesn't hold strong opinions about matters that are less important or positively unimportant....

September 24, 2021 00:00

September 17, 2021

Erlang Solutions

FinTech Matters newsletter | September 2021

Subscribe to receive FinTech Matters and other great content, notifications of events, and more to your inbox. We will only send you relevant, high-quality content, and you can unsubscribe at any time.

Read on to discover what really matters for tech in financial services right now for the Erlang ecosystem and beyond.

It’s back-to-school season following what was a disrupted summer for most, but one in which the FinTech world has continued to innovate and grow – global investment in H1 reached $98bn (£18bn in the UK) and Revolut raised a funding round of $800m at a valuation of $33bn.

Michael Jaiyeola, FinTech Marketing Lead

[Subscribe now]

The Top Stories Right Now

Study To Investigate The Impact Of Open Source Software On The EU Economy

This detailed report from the EU examines the technological independence, competitiveness and innovation around open source software. The main breakthrough of the study is described as the ‘identification of open source as a public good’. The value of open source technologies is well recognised in many modern industries, but financial services have lagged behind somewhat. It is true that in highly regulated industries compliance requirements may mean some extra work when it comes to open source, but the idea that you must build on proprietary technology to be successful is finally being dispelled. A report at the beginning of the year forecast the open source services market to grow by 24% by 2025.

Communities like those of Erlang and Elixir offer collaboration and information sharing to raise standards for all – it’s not only about free software. Where financial services infrastructure leverages open source technology that meets shared requirements, individual companies can focus on adding differential value to their products and services. For FinTech, faster innovation, lower development costs and shorter time to market are the holy trinity, and open source technology enables this in the right use cases, where being agile, responsive and scalable will determine competitiveness and success.

Get the report

Solarisbank raises $224M at a $1.65B valuation

Solarisbank, the tech company with a banking license (whose platform is built using Erlang and Elixir), will use the new funding to acquire Contis and expand API-based embedded banking tech in Europe. Solaris was one of the first fully-fledged banks to offer Banking-as-a-Service, one of the FinTech segments (along with embedded finance) that has thrived over the pandemic period.

Read more

FCA loses £300k worth of electronic devices

In a “do as I say, not as I do” comedy own goal, the FCA has misplaced a total of 323 electronic devices (estimated worth £310,600) over the past three years, according to a freedom of information request. The devices are predominantly made up of hundreds of laptops, tablets, desktops and mobile phones reported lost or stolen by FCA employees. Unsurprisingly, this raises questions about data protection standards at the industry regulator.

Read more

Verizon and Mastercard partner to bring 5G capabilities to payments

The strategic aim is to integrate 5G into payments focusing on contactless shopping, checkout automation and Point of Sale (POS) experience solutions. It is stated that this will be achieved by harnessing the latest in IoT technology alongside real-time edge computing.

Read more

More content from us

Kivra – Nordic FinTech case study for digital document sending platform

Memo Bank’s story  – How they used Elixir to build a bank from scratch

State of play in FinTech – I take a high-level look at some of the industry trends of 2021 so far

Kim Kardashian’s cryptocurrency Instagram post – the ‘financial promotion with the single biggest audience reach in history’!

When ultra-influential influencers meet newly developed tokens, what could possibly go wrong? Well, potentially plenty, according to the head of the FCA, Charles Randell, who called Ethereum Max (nothing to do with the Ethereum platform) ‘a speculative digital token created a month before by unknown developers’. Read more

One in four UK financial services workers want to work from home full-time

A new survey from Accenture has found that 24 per cent of the UK’s 1m financial services workers “would prefer to work entirely from home once a full return to office is possible”. Read more

Klarna joins leading climate change programmes

The Swedish BNPL unicorn is the first FinTech to sign up for The Climate Change Pledge and the Race to Zero campaign. Read more

Erlang Solutions byte size

Did you miss joining our livestream of “What’s Next for Blockchain in Financial Services” during FinTech Week London? Well, don’t worry – you can get exclusive early access to the full video of the panel debate here.

Code BEAM America – Created for developers, by developers, the conference is dedicated to bringing the best minds in the Erlang and Elixir communities together to SHARE. LEARN. INSPIRE. over two days, November 4-5.

Trifork Group (our parent company) reports revenue growth of 55% in Q2 and 46% in H1 2021. The Q2 2021 interim report can be downloaded here. In Q2, Trifork Labs continued its active investment strategy and increased its investments in new FinTech startups: Kashet, a mobile-first challenger bank in Switzerland, and a joint-venture FinTech startup (Money) co-owned by three mid-sized banks. Trifork has also entered an integration partnership with Modularbank, a cloud-native core-banking-as-a-service solution.

To make sure you don’t miss out on any of our leading FinTech content, events and news, do subscribe for regular updates. We will only send you relevant high-quality content and you can unsubscribe at any time.

Connect with me on LinkedIn


The post FinTech Matters newsletter | September 2021 appeared first on Erlang Solutions.

by Michael Jaiyeola at September 17, 2021 11:44

September 05, 2021

The XMPP Standards Foundation

The XMPP Newsletter August 2021

Welcome to the XMPP Newsletter covering the month of August 2021.

Many projects and their efforts in the XMPP community are the result of people’s voluntary work. If you are happy with the services and software you may be using, especially throughout the current situation, please consider saying thanks or helping these projects!

Read this Newsletter via our RSS Feed!

Interested in supporting the Newsletter team? Read more at the bottom.

Other than that - enjoy reading!

Newsletter translations

Translations of the XMPP Newsletter will be released here (with some delay):

Many thanks to the translators for their work! This is a great help in spreading the news! Please join them in their work or start a new translation in another language!


Events

XMPP Office Hours - Also, check out our new YouTube channel!

Berlin XMPP Meetup (remote): Monthly Meeting of XMPP Enthusiasts in Berlin - always 2nd Wednesday of the month.


Articles

What is project XPORTA? As announced in the April ‘21 newsletter, the Data Portability and Services Incubator at NGI is sponsoring the XMPP Account Portability project named XPORTA. This month they host an interview with Matthew Wild about how this project came into existence.

The “have your own TelCo based on XMPP” service has a new blog with a twist: it is now based on Libervia, and thus on XMPP, with all the nice blog features that you want (like RSS) and even subscriptions via XMPP (with compatible clients like Movim or Libervia). The post announcing the new blog also covers the new registration flow and billing system. But the previous post is the real jewel, called Adventures in WebRTC: Making Phone Calls from XMPP. It details the journey through WebRTC debugging, multiple clients, NAT and ICE, all monitored through Wireshark. Get a hot or cold beverage to go with this roughly 70-minute read.

In the previous newsletter we mentioned that Debian Linux 11 would soon launch with updated XMPP software. Now that this has happened, server admins are already updating or even setting up new deployments – such as Nelson from Luxembourg, who published a blog post about setting up a server with ejabberd on Debian 11 Bullseye.

While the Snikket iOS client app was just released (read more below), the behind-the-scenes development continues. In the latest blog post, Matthew Wild announces that the expert folk at Simply Secure will be performing a usability audit of the current app, as well as conducting usability testing, thanks to funding from the OTF’s Usability Lab. The analysis will help improve the UX of the iOS app and Snikket as a whole.

Missed in last month’s issue: the folks at CometChat have blogged about XMPP’s history, architecture, stanzas and features in general in Everything About XMPP - Extensible Messaging & Presence Protocol. If you want a quick technical overview (or need one to show others what XMPP is all about), this ~15-minute read can bring you up to speed.

“Spaces” are the new XMPP frontier to be explored, and you’ll get a glimpse of them in the Gajim client news below, but the work is elaborate and ongoing, with many people involved. pulkomandy, developer of Renga (an XMPP client for Haiku), has blogged Some random thoughts about XMPP spaces, thinking through use cases (family, business, communities) and user interfaces.

Any Turkish speakers reading the newsletter? We don’t have a translation yet, but Ged has just published an in-depth blog post about XMPP titled Hangi “Chat” Programı?. In about 40 minutes it takes the reader through the story of the protocol, covering apps, servers, comparisons with popular apps, and privacy.

The March ‘21 newsletter brought the news that JSXC (the JavaScript XMPP Client) got funding to work on group chat calls. This month they report on the work done and explain the current progress, which can even be tested.

Finally, how does FaceTime work? Interestingly, it uses the same port (5223) as XMPP does…

Software news

Clients and applications

Gajim 1.4 Preview: Workspaces. The Gajim team has been hard at work in the past months to prepare the next v1.4 release. The upcoming version brings a major interface redesign. In this post, they explain how the new interface works and what remains to be decided or implemented before the release.

Gajim Workspaces (preview)

Libervia progress note 2021-W31 is out with information about Docker integration, the translation portal and the first 0.8.0 beta. It also has plenty of details about the work done on the ActivityPub Gateway project (grant announced in the April ‘21 newsletter) with SQL, DBus, PubSub and with new and updated XEPs.

Communiqué is a new XMPP client from the Mellium Co-op team. It was announced this month and presented at the XMPP Office Hours (unfortunately, the recording did not work out). The source code can be found in the repository.


Monal 5.0.1 is now available for both iOS and macOS, bringing mostly corrections and more polish over the previous major release.

JSXC Openfire plugin gets a 4.3.1-1 release, with mostly bug fixes and improvements from the JSXC project.

After so many months of waiting, the Snikket iOS app is now publicly released. Snikket server admins can add the app to their invitation pages so that Apple users can easily find it. If you are not running Snikket you can still use the app (you can log in with your credentials directly), but do read the blog post to learn what you need to add to your Prosody instance (the invitations modules) and what limitations you might experience with any other server software.

Snikket on iOS


Prosody 0.11.10 has been released with a fix for CVE-2021-37601 and some minor changes. The Prosody developers recommend that server admins upgrade in order to fix the remote information disclosure issue.


The Mellium Dev Communiqué for August includes updates to the Mellium XMPP library as well as the new Communiqué instant messaging client. The biggest updates this month are MAM and ad-hoc commands support! You can read more here.

Extensions and specifications

Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).


Proposed

The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

  • No XEPs proposed this month.


New

  • Version 0.1.0 of XEP-0460 (Pubsub Caching Hints)
    • Accepted by vote of Council on 2021-07-21. (XEP Editor (jsc))


Deferred

If an Experimental XEP is not updated for more than twelve months, it will be moved from Experimental to Deferred. If there is another update, the XEP will be moved back to Experimental.

  • No XEPs deferred this month.


Updated

  • Version 1.21.0 of XEP-0060 (Publish-Subscribe)

    • Revert change from version 1.15.5 which changed meta-data to metadata in wire protocol. That was an unintended breaking change which has now been reverted. (pep)
  • Version 0.3.0 of XEP-0214 (File Repository and Sharing)

    • Revert change from version 0.2.1 which changed meta-data to metadata in wire protocol. That was an unintended breaking change which has now been reverted. (rm)
  • Version 0.3.0 of XEP-0248 (PubSub Collection Nodes)

    • Revert change from version 0.2.1 which changed meta-data to metadata in wire protocol. That was an unintended breaking change which has now been reverted. (rm)
  • Version 0.2.0 of XEP-0283 (Moved)

    • Re-write the flow with a more focused approach. (mw)
  • Version 1.1.0 of XEP-0429 (Special Interests Group End to End Encryption)

    • Add discussion venue after creation by the Infrastructure Team. (mw)
  • Version 1.24.0 of XEP-0001 (XMPP Extension Protocols)

    • Change “Draft” to “Stable”. (ssw)

Last Call

Last Calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call helps improve the XEP before returning it to the Council for advancement to Draft.

  • No Last Call this month.

Stable (formerly known as Draft)

Info: The XSF has decided to rename ‘Draft’ to ‘Stable’. Read more about it here.

  • No Stable this month.

Call for Experience

A Call for Experience – like a Last Call – is an explicit call for comments, but in this case it is mostly directed at people who have implemented, and ideally deployed, the specification. The Council then votes to move the XEP to Final.

  • No Call for Experience this month.

Thanks all!

This XMPP Newsletter is produced collaboratively by the XMPP community.

Therefore many thanks to Adrien Bourmault (neox), Anoxinon e.V. community, anubis, Benoît Sibaud, emus, Sam, Licaon_Kter, nicola, seveso, SouL, wurstsalat3000, Ysabeau for their support and help in creation, review and translation!

Spread the news!

Please share the news via other networks:

Find and place job offers in the XMPP job board.

Also check out our RSS Feed!

Help us to build the newsletter

We started drafting in this simple pad in parallel to our efforts in the XSF GitHub repository. We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. We really need more support!

Do you have a project and write about it? Please consider sharing your news or events here, and promote it to a large audience! Even if you can only spend a few minutes on support, that would already be helpful!

Tasks that need to be done on a regular basis include:

  • Aggregation of news in the XMPP universe
  • Short formulation of news and events
  • Summary of the monthly communication on extensions (XEP)
  • Review of the newsletter draft
  • Preparation for media images
  • Translations: especially German and Spanish


This newsletter is published under CC BY-SA license.

September 05, 2021 00:00

August 31, 2021


Snikket iOS app now publicly released

This is the announcement many people have been waiting for since the project began!

Opinions are often strong about which is the best mobile operating system. However, while it varies by region and demographic, wherever you are it’s very likely that you have Apple users in your life, even if you don’t use one yourself. We want to ensure that the platform you use (by choice or otherwise) is not a barrier to secure and decentralized communication with the important people in your life.

The lack of a suitable client for iOS was an obstacle to many groups adopting Snikket and XMPP. For this reason, today’s release of a Snikket app for Apple’s iPhone and iPad devices is a significant milestone for the project.

A community effort

It’s a journey that began late last year with the announcement that we would be sponsoring support for group chat encryption in Siskin IM, the open-source iOS XMPP client developed by Tigase.

The Tigase folk have been very supportive of our project, and I’d like to especially thank Andrzej for his assistance and patience with all my newbie iOS development questions!

There are many other folk who have also helped unlock this achievement. This includes everyone who helped to fund the development work - especially Waqas Hussain, the kind folk at and of course absolutely everyone who has donated to the project. The majority of donations are anonymous so it’s impossible to thank everyone individually, but the amount of support we’ve received as a project is amazing, and really gives us confidence in achieving even more ambitious milestones in the future.

Funding aside, we couldn’t have refined the app without help from our diligent beta testers - with particular thanks to Michael DiStefano, Martin Dosch, mimi8999 and Nils Thiele for their bug-catching and comprehensive feedback. Everyone participating in the beta programme has helped shape the app we’re releasing today.

What happens now?

We’ll be rolling out a Snikket server update shortly that will add a link to the iOS app from Snikket invitation pages. If you’re eager to make the app available to your users before then, you can add the following line to your snikket.conf:


After saving the file, apply the change with the command docker-compose up -d.
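For admins who like to see the whole procedure in one place, here is a minimal sketch of applying a snikket.conf change on a typical docker-compose based deployment. The directory path is a placeholder for your own setup, not something specified in this post; the configuration line itself is the one given in the announcement.

```shell
# Sketch only – adjust the path to wherever your Snikket deployment lives.
cd /opt/snikket                  # directory containing snikket.conf and docker-compose.yml
"${EDITOR:-nano}" snikket.conf   # add the configuration line from the announcement
docker-compose up -d             # recreate the containers so the new setting takes effect
docker-compose logs -f           # optionally watch the logs to confirm a clean restart
```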

If you are using the Snikket hosting service, you will get an email soon that explains how to enable the app store link for your instances.

We’re not done yet

This is a big milestone, without a doubt. But we’re not completely done. The app is not perfect (yet!) and we’re still working on many things. But we believe this is no reason not to share it with the world as early as we can.

Push notification compatibility

The first thing to note (especially as many non-Snikket users will also be excited about a new iOS XMPP client on the scene) is that our primary focus has been on making the app work seamlessly with Snikket servers. We’re committed to XMPP interoperability, but time and resources mean we can’t develop and test every change against every XMPP server.

Although we expect it to generally work, there are currently some known compatibility issues. Specifically, due to the strict “no background network connections” policy for iOS apps, we have needed to handle push notifications slightly differently from what most XMPP servers support today. The extensions we use are openly published by Tigase, we have made community modules available for Prosody (mod_cloud_notify_encrypted, mod_cloud_notify_priority_tag and mod_cloud_notify_filters), and discussion has begun on moving these extensions into the XMPP Standards Foundation standards process. We welcome help and contributions towards evolving XMPP’s current push notification support. If you’re interested, reach out!
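Prosody admins who want to prepare their server can enable the community modules named above in the server configuration. The fragment below is only a sketch: it assumes the modules have already been installed (for example from the prosody-modules community repository) alongside the base mod_cloud_notify push module.

```lua
-- prosody.cfg.lua (sketch; assumes the community modules are installed)
modules_enabled = {
    -- ...your existing modules...
    "cloud_notify";               -- base push notification support (XEP-0357)
    "cloud_notify_encrypted";     -- encrypted notification payloads
    "cloud_notify_priority_tag";  -- priority tagging for notifications
    "cloud_notify_filters";       -- per-user notification filtering
}
```

As the post notes, discussion on standardizing these extensions has only just begun, so module names and behaviour may still change.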

Until then, although the app includes some backwards-compatibility considerations, it is very possible you will experience issues with notifications on some non-Snikket servers when the app is closed (though Tigase servers, and Prosody servers with the community modules enabled, should be fine).

Language support

The app is currently only available in English, an unfortunate contrast to all the other Snikket projects, which are already available in many languages.

Updating the app to support translation of the interface is high on our priority list. After this is implemented, we will also be looking for help from translators, so stay tuned for further announcements.

Other work in progress

Other known issues that we are working on:

  • Notifications for OMEMO-encrypted messages show a potentially-confusing message about the app lacking OMEMO support. This will be fixed by the same server update that adds the app to the Snikket invitation page.
  • Group chat notifications are not yet working. This will also be rolled out as a future server update.

Of course, we will also soon be incorporating feedback from the usability audit and testing sessions when that work is completed.

I want to say a final thanks to our entire community for supporting the project. Snikket has ambitious goals, and the progress we’re making couldn’t be achieved without all the help and support we’ve received.

Drop us feedback about the app if you try it out, file bug reports and feature requests to help us with planning and, if you can, donate to help sustain the development of the entire project.

We look forward to welcoming more users to the XMPP network than ever before!

by Snikket Team at August 31, 2021 14:00

August 27, 2021


Gajim 1.4 Preview: Workspaces

The Gajim team has been hard at work in the past months to prepare the next v1.4 release. The upcoming version brings a major interface redesign. In this post, we explain how the new interface works and what remains to be decided or implemented before the release.

Of course, your feedback is important! No interface can please everyone, so please react to this post with how this change would impact you positively and negatively, and ideas you have to make it even better before the release.

This blog post is in part based on the Gajim 1.4 UI/UX Preview given by lovetox, a current maintainer of Gajim. So if you prefer the video format, click on that YouTube link or use your favorite Invidious instance to view it with a lightweight, privacy-friendly client. That presentation was given as part of the XMPP Office Hours programme, where you can find other interesting presentations about the Jabber/XMPP ecosystem, or propose your own!

Single-window application

The main change in Gajim’s new release is that, in the current implementation, it becomes a single-window application. For over a decade we have been used to having separate windows for the contact list (roster) and for chats. This user interface pattern was common in early-2000s messengers such as MSN and ICQ.

In the upcoming release, we make Gajim a single-window application, where all features are always within your reach. This change is inspired by more recent messengers such as Element, Discord or Mattermost (among others). This is what it looks like so far:

Gajim’s new main window


Some people feel left out by this new feature and the removal of the multi-window mode; however, we hope to reconcile our users’ needs as part of the Gajim project, as explained in the Areas for improvement section of this blog post.


Workspaces

Gajim v1.4 will introduce a new concept: workspaces. Previously, all tabs were considered equal, as a flat list within a window. We understand the need to organize some activities into a specific context; without multiple windows, we organize these activities by workspace.

A workspace is a collection of group chats and private chats, organized client-side. For the moment, this is a non-standard, Gajim-specific feature, but standardization efforts are explained in the Areas for improvement section.

We introduced a new sidebar on the left of the window which lets you navigate your workspaces and accounts. After clicking any workspace, the chat list will be displayed in the sidebar. This chat list, to the right of the workspace list, provides navigation for chats (both group chats and private chats) within the current workspace. The currently focused workspace has a colored bar indicating that it is the current context.

Below the workspace list, the sidebar lists your accounts. Clicking an account will display a page containing the contact list, your avatar, a status selector, and a list of pending notifications. Contacts in the contact list are organized by roster groups, as was already the case in previous versions.

Account context

Each account is assigned a specific color, in addition to its avatar. This color is reused in the chat list, alongside each tab’s avatar, so you can see instantly which of your accounts is used in a specific chat. When a given chat/account doesn’t have an avatar defined, one is generated from the first character of its displayed name.

Gajim with multiple accounts


When a notification is received within a certain workspace, an indicator with the number of unread messages will be shown on the workspace icon and on the chat.

Organizing your interface

Workspaces can be reordered manually within the sidebar by drag-and-drop. However, the two different types of context are kept separate: workspaces appear at the top of the list, while accounts are listed at the bottom. When there are too many entries to display, the workspace/chat list becomes scrollable.

Chats can also be moved from one workspace to another, though not via drag-and-drop: simply right-click a chat and use the “Move to” menu to move the selected chat to the requested workspace. However, it isn’t currently possible to copy a chat to another workspace; moving an entry to a new workspace removes it from its previous workspace.

Within a given workspace, chats can be pinned. These stay in place at the top of the workspace’s chat list. Chats which are not pinned are ordered by latest activity. This way you never have to scroll endlessly to find the chat that matters to you. For the moment, pinned tabs cannot be reordered like workspaces, but we plan to implement it.

Try it out and let us know

There are a lot of major changes coming in the next Gajim v1.4 release, so stay tuned to the blog for further information. In the meantime, you can test the new interface by running Gajim from source using just a few commands. This feature is not published in nightly releases yet because it is still unstable, so do not use it as a daily driver yet.

Important: Note that you have to start Gajim with a test profile using gajim -s -p testprofile in order to preserve your current profile. Migrating back is not possible.

  • git clone && cd gajim to download Gajim’s source into a gajim folder and move there
  • git checkout mainwindow to check out the development branch with the new UI
  • pip install . to install Gajim’s development version and all dependencies into your Python environment, then gajim -s -p testprofile to start
  • alternatively, ./ -s -p testprofile to start Gajim without installing it, in which case dependencies should be set up manually first (for example, on Ubuntu)

Feedback is welcome in any form, whether on our issue tracker, in our community chat, or as a blog post on your own website. The main tracking issue for this new user interface is #10628.

Areas for improvement

In this section, we explain the shortcomings of the current implementation of the workspaces feature, and what could be done to improve it. We are actively looking for ideas on these areas, so if you can afford it, please spend some time to gather your thoughts and help us improve Gajim.


Account context relies on user-supplied colors. However, for accessibility reasons (color blindness), we would like to support other visual patterns in addition to colors, such as the dots, dashes, and other patterns commonly used in graphs and tables. That said, unless we get more contributions, it’s unlikely this feature will be released in v1.4.


The main window redesign does not yet have special support for right-to-left (RTL) languages: the navigation sidebar is displayed on the left side of the screen in all cases.

UI customization

Some users have already expressed anxiety at the idea of Gajim dropping support for multiple windows. However, there is technically no barrier preventing us from reimplementing it with our new user interface. It’s “just” a lot of hard work.

For example, maybe we could have a mode where each account gets its own window that can be moved around separately? Or pop a workspace out of the main window into its own window? That would be useful when using virtual desktops (sometimes called workspaces, what a coincidence) in your favorite desktop environment.

In addition, we could explore supporting multiple sidebars on multiple axes, so that you could decide where to place your account list, and split your workspace list into top and bottom sidebars.

Your imagination and contributions to the Gajim project are the only limits to the kind of experience we can provide, but deeper UI customization is very unlikely to be implemented in time for the v1.4 release. We are a volunteer-run project and cannot afford to accommodate every single need there is, although contributions are always welcome.

More workspace organization

Currently, pinned tabs in the chat list cannot be reordered in the way that workspaces can be in the workspace list. Would this be useful for you?

Moreover, Gajim’s new workspaces UI currently features a 2-level representation like Mattermost’s, where any chat has a single ancestor workspace. The account roster is an exception: it features a third level of nesting to fit roster groups, where each entry is part of a group, which is part of the account workspace context. Maybe workspaces could benefit from this approach in order to represent 3-level hierarchies akin to the Discord/Element interface.

Also, for the sake of simplicity, a chat can currently only appear in a single workspace. That’s a fine assumption as long as workspaces are managed by a single user for their own needs, but it would not play well with sharing workspaces with other users, in which case a chat may appear more than once in the workspace tree.
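As a rough sketch of this trade-off (hypothetical types and names, not Gajim’s actual data model), the current single-workspace constraint can be expressed as a one-to-one mapping from chats to workspaces:

```python
from dataclasses import dataclass, field

@dataclass
class Workspace:
    name: str
    pinned: list = field(default_factory=list)   # pinned chats stay on top
    chats: list = field(default_factory=list)    # remaining chats, by activity

class ChatList:
    """Single-parent model: each chat lives in exactly one workspace."""
    def __init__(self):
        self.workspaces = {}
        self.chat_to_workspace = {}  # invariant: one entry per chat

    def add(self, workspace: str, chat: str):
        ws = self.workspaces.setdefault(workspace, Workspace(workspace))
        ws.chats.append(chat)
        self.chat_to_workspace[chat] = workspace

    def move(self, chat: str, target: str):
        # "Move to" removes the chat from its previous workspace,
        # because a chat can only appear once in the tree.
        old = self.workspaces[self.chat_to_workspace[chat]]
        if chat in old.chats:
            old.chats.remove(chat)
        self.add(target, chat)

cl = ChatList()
cl.add("Work", "alice@example.com")
cl.move("alice@example.com", "Friends")
```

Allowing shared workspaces would turn chat_to_workspace into a one-to-many mapping, which is exactly the complication described above.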

Standardization and interoperability

As mentioned briefly, we’re considering how our new workspaces feature can be represented server side, so that it can be used by other clients, and maybe even shared across users.

Sharing a workspace with several users, similar to Matrix “spaces” or Discord “servers”, could prove very useful for online communities administering a number of channels, for example to set space-wide permissions. It could also let you subscribe to a public workspace maintained by one of your contacts, featuring a collection of third-party group chats on a specific topic.

While there is not yet a specification for such a hierarchical organization of chats in the XMPP ecosystem, an XMPP Online Sprint last winter studied Discord’s user experience with the aim of benefiting the Jabber/XMPP ecosystem.

More recently, some people have started to gather thoughts that should lead to a specification. There is a work-in-progress document (a pad) which anyone can edit with feedback, and a group chat has been set up to discuss this issue in a cross-project manner. Your ideas and contributions are more than welcome, even if you’re not familiar with the Jabber/XMPP ecosystem. Feedback on how a new specification could be made interoperable with other decentralized networks is especially welcome.


August 27, 2021 00:00

August 23, 2021


Improving Snikket's usability in collaboration with Simply Secure

One of the primary goals of the Snikket project is improving the usability of open communication software. We see usability as one of the major barriers to broader adoption of modern communication systems based on open standards and free, libre, open-source software. By removing this barrier, we open the door to secure and decentralized communication for many vulnerable groups for whom it was previously inaccessible or impractical.

Simply Secure is a non-profit organization working in user interface (UI) and user experience (UX) design. They specialize in combining human-centered design with the complex technical requirements of privacy-first secure systems. Our first introduction to Simply Secure was while contributing to Decentralization Off The Shelf (DOTS), a unique and valuable project to document and share successful design patterns across the decentralized software ecosystem.

Now, thanks to funding from the OTF’s Usability Lab, we’re pleased to announce that Simply Secure will be working with us over the coming months to identify issues and refine the UX across the project, with a special focus on our iOS app.

We’ve made a lot of progress on the Snikket iOS app recently, largely based on valuable feedback from our beta testers, and we are getting excitingly close to a general release. However, there is still some work to be done.

The expert folk at Simply Secure will be performing a usability audit of the current app, as well as conducting usability testing, which is the study of how people use the app, and what struggles they face while completing specific tasks.

Using information from these analyses the Simply Secure team will assist with producing wireframes (sketches of what the app’s interface should look like) and actionable advice to improve the UX of the iOS app and Snikket as a whole. You will find information on how to participate later in this post.

What is UX anyway?

The modern UX design movement is a recognition that technology should be accessible and easy to use for everyone. Good design can assist and empower people, poor design can hinder and even harm people. The need for design goes far beyond making a user interface look beautiful. Software that is not visually appealing may affect someone’s enjoyment of an application, but an aesthetically-pleasing interface is not magically user-friendly.

Therefore, designing for a good user experience is about more than just making the interface look good: it’s about considering how the software fits into a person’s life, what they need from the software (and what they don’t), and how they expect it to behave.

These are tricky things to get right. Every user is different, and a broad range of input must be taken into consideration as part of a good design process.

UX methodologies

There are various ways to gather information useful for making informed decisions about UX improvements. A common, easy, and cheap approach is to add metrics and analytics to an app. This can tell you things like how often people tap a particular button or view a particular screen. Developers and designers can use this information to learn which features are popular and which should be removed or made more visible.

This approach has drawbacks. Firstly, it only tells you what users are doing; it doesn’t tell you why they are doing it, or what they are thinking and feeling - for example, whether they are frustrated while looking for a particular feature or setting. Metrics can tell you that making a button more prominent increased the click rate, but they won’t tell you if half the users who clicked on the button were expecting it to do something else! This isn’t really going to give you enough information to improve usability.

Another significant drawback with a focus on metrics is the amount of data the app must share with the developers. People generally don’t expect apps on their device to be quietly informing developers about the time they spend in the app, what they look at and what buttons they press. Such data collection may be made “opt-in”, and there are modern projects such as Prio, working to bring privacy and anonymity to such data collection through cryptographic techniques.

A wildly different but much more valuable approach is to directly study people while they use the app - a technique known as “usability testing”. Unlike silent data collection, usability testing directly pairs individual users or groups with an expert while they are asked to perform specific tasks within the app. Although this requires significantly more time and effort it produces more detailed and specific insights into the usability of an interface.

Advantages of this kind of study include the ability to listen to and learn more deeply the needs of specific types of users, particularly minorities whose problems could easily be drowned out by larger groups of users in a simple statistics-driven data-collection approach. It also allows you to capture people’s thought processes by asking them to explain each step as they complete tasks within the app.

Participation and looking forward

We can’t wait to begin our first usability testing facilitated by the experienced team at Simply Secure, and incorporate their findings into Snikket’s development.

If you’re interested in taking part, or know someone who would be a good fit for this project, we’d love to talk to you for 30 minutes to better understand how to improve Snikket. There will be no invasions of privacy as a result of this research. All identifying information will be removed. We will take all necessary and appropriate precautions to limit any risk of your participation. Anything that we make public about our research will not include any information that will make it possible to identify you. Research records will be kept in a secure location, and only Simply Secure and Snikket personnel will have access to them.

Appointment slots are available from 24th August to 3rd September.

Update: The usability testing phase of this project has now ended. Many thanks to everyone who participated, and helped spread the word!

Further reading

by Snikket Team at August 23, 2021 10:00

August 19, 2021


Newsletter: Blog, New Registration, New Billing, New App!

Hi everyone!

Welcome to the latest edition of your pseudo-monthly JMP update!

In case it's been a while since you checked out JMP, here's a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client. Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

In case you haven't seen it yet, we now have an XMPP-powered blog! All newsletter updates, as well as other content like technical deep-dives will be published there. If you just want these updates, don't worry, the mailing list isn't going away. You can check out the blog at and follow in your RSS reader or compatible Jabber client such as Movim or Libervia.

JMP also has a new registration flow. This flow properly integrates with our new billing system and represents a lot of behind-the-scenes work on our architecture. The most important part of the new billing system is the referral system. That's right: JMP users can now get single-use invite codes to refer new users. The new user gets one free month, and if they decide to upgrade to a paid account, the original user gets a free month of credit too! XMPP server operators of closed or vetted groups can also contact support to ask that their server be added to an approved list, so that all Jabber IDs coming from that server are given a free month, with the resulting credit if they upgrade going to the server operator.

Speaking of our new billing system, many users have been fully migrated to the new architecture, which says goodbye to PayPal and hello to automated credit card and Bitcoin deposits, as well as official support for payment by mail or (in Canada) Interac e-Transfer. Payments can also be made in Bitcoin Cash by contacting support. Users on the new system now have a prepaid balance they can top up any time they like, with the option to automatically top up a low balance with any amount of $15 or more from a credit card. Deposits over $30 get a 3% bonus added, and deposits over $140 get a 5% bonus. This paves the way for calling beyond 120 minutes per month (which will soon be available at a rate of $0.0087/minute) and also for international calling at per-minute rates to be announced later this year. Those who prefer to pay the same amount every month or year, as with our legacy PayPal system, will need to wait a bit until we integrate that option into the new system.
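For illustration, here is how the published bonus tiers work out in practice (the function and its name are our own sketch of the stated rates, not JMP's billing code):

```python
def deposit_with_bonus(amount: float) -> float:
    """Apply the published deposit bonus tiers:
    5% for deposits over $140, 3% for deposits over $30."""
    if amount > 140:
        return round(amount * 1.05, 2)
    if amount > 30:
        return round(amount * 1.03, 2)
    return amount

# e.g. a $50 deposit credits $51.50, and a $150 deposit credits $157.50
```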

We've also had a volunteer working with us to prepare some new features for Android users, most notably DTMF (punching in numbers during a call) so that all phone calls can be done from inside Conversations. The code isn't quite ready for upstream yet, but drop by the chatroom if you want to try out a prototype.

As always, if you have any questions, feel free to reply to this email or find us in the group chat per below. We're happy to chat whenever we're available!

To learn what's happening with JMP between emails like this, here are some ways you can find out:

Thanks for reading and have a wonderful rest of your week!

by singpolyma at August 19, 2021 00:30

August 18, 2021

Erlang Solutions

FinTech 2021 State of Play

While things have undoubtedly changed considerably for the financial services industry over the past 18 months, the ascendancy of FinTech remains unabated, with global FinTech investment reaching $98bn. In the UK, FinTech investment hit a new record of £18bn in the first half of 2021, placing it second only to the United States - an impressive feat during a time of considerable uncertainty brought on by the pandemic and short-term Brexit fallout.

The FS industry has proved extraordinarily resilient and indeed many segments have even thrived during this period, such as FinTechs operating around digital payments and processes. 

The overarching trends for the industry are the accelerated digitalisation of banking, adoption of embedded finance (including buy now, pay later) and decentralised finance to further democratise access and opportunity to financial services. In this post, we take a look at these along with some general high-level trends in software engineering in the sector.

Digital payments and eCommerce growth

Disruption and innovation in payments technology are constant; we have been at the cutting edge of real-time payments through our work with Vocalink (a Mastercard company) to build their Immediate Payment System used globally by the likes of The Clearing House in the US and the P27 group in Scandinavia. A recent Mastercard report found that the first quarter of 2020 saw a larger shift towards digital payments in 10 weeks than in the preceding five years, and that consumers spent nearly $900bn worldwide with online retailers in 2020. Less than a year since contactless limits increased across Europe, Visa has recorded one billion additional touch-free transactions, 400 million of which took place in the UK.

In light of this surge in digital payments volume and to better align with customer preferences and capitalise on advances in payments technology, stakeholders such as issuers, networks, payments processors, and merchant acquirers are investing heavily to retool their payments systems. Meanwhile, embedded finance, point-of-sale lending and buy-now-pay-later financing products are reshaping the lending and payments experience to create faster digital options with less friction. 

Millennials and Generation Z were already used to managing their financial affairs through digital channels, and they are now joined by many other demographics and late adopters. This means there will not be a significant rollback to the old ways of interacting with financial products, presenting a clear opportunity for innovators within B2C FinTech.

FinTech Software Engineering Trends

In recent times, leading banks were already modernising as part of a strategy to exit or manage legacy core systems that inhibit faster and more transformative technological innovation; more resources will now be diverted to this strategy. Those slower to embrace genuine digital transformation have been pushed towards modernising with short-term fixes that will require permanent solutions.

From our position as specialists in soft real-time distributed backend systems, what has become abundantly clear is that resilience and scalability are not a given for all FinTechs, nor are they good enough as an afterthought. The risk to reputation and trust when problems occur under system stress can inflict damage that is hard to recover from. Using highly reliable technologies with scalability baked in, such as Erlang and the languages that run on its BEAM virtual machine, may not only help startups avoid technical debt but may actually save your entire business during unexpected challenges like those of the current moment. Interestingly, a lot can be taken from Erlang’s success in the telecoms industry (it was originally developed at Ericsson) and applied to many FinTech use cases – check out this post by our Nordics MD, Erik Schon: How Telcos can help FinTechs succeed.

In successful FinTech stacks, services that are loosely coupled and readily upgradeable are the norm. It’s advisable to not rely on just one software vendor to avoid damaging lock-in. Instead, FinTechs build their own ecosystem of high-performance technology providers that can be added to, upgraded and replaced as required. 

On the product side, constant iterations are necessary to stay ahead of the competition. The providers that meet or surpass customer expectations by offering value-added services designed for specific segments are building loyalty and taking market share. Although the client-facing frontend must deliver from a CX and UX perspective, it has to be backed by reliable infrastructure with minimal downtime and disruption. Read our summary of Memo Bank’s adoption of Elixir for full-stack development; they have just successfully secured a new funding round of €13 million.

As I previously stated, these trends are not new - they are in fact much-followed principles in the software engineering world and in FinTech - but they are proving even more important as guiding principles in the new environment we are working in.

DLT / Blockchain

In terms of media spotlight, cryptocurrency has been the most talked-about part of how monetary models are changing. We have worked with Distributed Ledger Technology (blockchain), the underlying technology of crypto, for quite some time and recognise the potential to offer exciting enterprise solutions in FS. Blockchain network and protocol layers are now widely accepted as robust and stable, ready for innovation to move to the application layer where the real opportunity for differentiation lies. We recently co-hosted this interesting panel debate on blockchain use cases in FS as part of Fintech Week London – Get the recording here.

As the higher-stack application layer is more nascent, this is where experienced developers become especially valuable to avoid wasting time and resources. The Erlang Solutions team consists of domain experts who have been involved in a wide variety of innovative blockchain projects. We have observed that PoCs are increasingly being replaced with deployments to real production systems in FinTech and beyond. While most use cases remain private and permissioned, this is still an encouraging indicator for anyone interested in leveraging the technology.

What’s Next For FinTech

The FS industry will need to make permanent some of the learnings of the lockdown periods to create more agile workforces which boost productivity, creativity, and collaboration. FIs will look to increase investment in FinTech to stay competitive, not only in customer facing digital tools but also in the back office space as a means to improve processes and reduce costs. Software engineering has become the core of value creation, and the methods used can significantly influence business results, especially in fast-moving sectors such as FinTech.

Erlang Solutions have over 20 years of experience building critical digital infrastructure that scales to billions of users without downtime. Talk to us about how we can help you develop a future-proof system that is faster, easier to maintain, more reliable, and cheaper to run. You can read our Founder and Technical Director Francesco Cesarini’s account of how Erlang Solutions have applied lessons from the 2008 crisis to our operational model today.

To receive our whitepaper report on the Trends In FinTech 2021, please sign up for our FinTech Matters newsletter here, where you can receive that and other exclusive industry content (you can unsubscribe at any time).

The post FinTech 2021 State of Play appeared first on Erlang Solutions.

by Michael Jaiyeola at August 18, 2021 12:00

August 17, 2021


Adventures in WebRTC: Making Phone Calls from XMPP

Normally when I'm writing an article, I prefer to not front-load the article with a bunch of technical stuff and instead bring it up organically. This article is different, though, and if anyone is going to get anything out of this I've got to set up a bit of background, so bear with me. I'll go into more detail on these things when it makes sense, but I have to at least introduce the players in our little play.

First, we have calls. You probably know about these, phones aren't exactly new, but I want to clarify that these are voice calls, as you may expect from a phone. One person calls the other, it rings, they pick up, and then audio conversation can be had.

Second, we have XMPP, a chat protocol that has been popular in the FOSS (Free and Open Source Software) community for years. It has also made appearances in various commercial offerings, including Google's GTalk, Facebook's Messenger (at one point), and others. The big feature XMPP has compared to other chat protocols is that it's a standard, which means there are multiple implementations of both clients (the program the user uses) and servers (which host the user's account). There is also a large collection of extensions which extend the standard to provide more features that clients and servers can choose to implement. Also importantly, it's "federated", which means users of one server can talk with users of another server seamlessly.

Third, we have Jingle, which is one of the previously mentioned extensions and allows two XMPP users to set up a peer-to-peer data connection - in this case, for exchanging voice data.

Fourth, we have Asterisk. Asterisk is an open-source telephone system. You can use it to receive or send phone calls, set up phone trees and extensions, etc. Because it supports many different protocols for sending voice data, it can be used to connect a call from one protocol, like Jingle, to another.

And finally, JMP, which is the company I'm working for. JMP integrates XMPP with the mobile phone network, giving XMPP clients a user they can contact representing any phone number, and we'll turn the chats and Jingle calls into SMS messages and phone calls and vice-versa. This allows JMP's customers to use a client of their choice, across mobile devices and desktops, to communicate with people who are still using SMS and traditional telephones and haven't moved to XMPP yet.

Ok, now we can establish the starting point of our story. We have SMS working in both directions already, and thanks to the work of my co-worker singpolyma and patches from a user named eta, we already had phone calls coming in properly. But allowing our users to call out to phone numbers - that is, from XMPP to cell phones - wasn't working yet. We figured it was just a small tweak to the existing setup, so I set out to find the simple change that was required.

And now, over a month later, here's the path I went through, with gratuitous technical details along the way.

My Initial Testing Setup

It started out pretty simply. I have an app on my phone, Conversations, which is one of the XMPP clients we recommend to our users. I tested that I could receive calls, and everything worked out well. But when I tried calling a special user created just to test outbound calls, my phone would ring and I would answer it, but the app would just see "connecting" forever.

I tested a normal Jingle call to another user of the same app, and it worked in both directions, so I knew the app worked fine. I didn't see anything in the logs from my phone to describe what might be the problem, so I had to look somewhere else. There were some logs in Asterisk, which is acting as a bridge between Jingle and the phone network, but nothing that stuck out right away. In order to get more information about what was actually happening, I wanted to get into the code and add my own debugging logic. Luckily, Conversations is FOSS, so that's very possible. But it was also inconvenient to install a development build of Conversations onto my phone, because that would replace the app I used every day. I also wanted to use more of the tools I had on my computer, so I decided to install Movim instead. Movim is another XMPP client that supports the same voice-calling features, but it runs on the computer - in a web browser, in fact. Normally it would run as a hosted setup, where someone runs a Movim server for you out on the web and you just connect to it as a web page, but given what I wanted to accomplish, I had to run the server myself on my computer so I could get at both sides of the conversation. A quick test confirmed that Movim could send and receive XMPP-to-XMPP calls just fine, but had the same issue with XMPP-to-Phone calls, so it wasn't just a Conversations bug.

I was now ready to start actually digging into this problem.

Initial Jingle Debugging

XMPP, as a protocol, works by sending XML stanzas over a connection. This means it's very easy to just look in some logs and see what's actually going on. For example, if I were to send a normal message it may look something like this:

<message type='chat' id='dd75a234-8f44-46ff-bd37-c878d04aef92' to='someone@example.com'>
  <body>Hello!</body>
</message>

That makes it pretty easy to debug, which is why that's where I started. My initial theory was that there was something wrong with the messages being forwarded back and forth from my users to Asterisk that was making it impossible for them to talk to each other. To investigate that, I changed my local version of Movim to log every XMPP blob it sent and received, so I could inspect them with my human eyes and brain to make sure they look legitimate. To my disappointment they did appear to be relatively sensible on both the Asterisk and Movim side; the information they exchanged seemed to contain accurate addresses and formats. It would have been nice to have been done here.

Oh well! Undeterred, I traced through the Movim code to figure out where the information eventually ended up. Somewhere in the code, Movim must use these exchanged addresses to establish the voice connection. Eventually I got down to a line in the front-end JavaScript that just took the result, converted the format of the data, and then called setRemoteDescription on some peer connection object. That's part of the WebRTC APIs. Crap.

Detour 1: WebRTC

WebRTC is a set of APIs supported by modern web browsers that allow users to establish peer-to-peer connections. Normally when a user is visiting a webpage, even a page that gives the appearance of interacting with other users, that interaction is actually done through the web server. To illustrate the difference, let's assume that two people, named Romeo and Juliet, are using a webpage to chat. Romeo visits the webpage, contacting the server and requesting the chat page, which is sent to his browser to show him a box to type things in and the messages he's already received. He sees a new message from Juliet, so he types his response into the box and hits send. What actually happens is that Romeo's browser sends the message he typed to the server along with information about who should receive it. The server will take the new message and store it along with the other messages in the list of messages Juliet has received. At some point in the future Juliet will load this page, and in doing so the server will send her all the messages that have been stored for her, including this new one. If the site is fancy, she won't even have to refresh the page to get them! Maybe it's periodically asking the server if there are any new messages so long as she's on the page, or maybe there's even a WebSocket waiting to be pushed messages, but in either case Romeo's message goes from his computer, through the server to get stored, and then is retrieved from that server when Juliet's computer requests the messages.
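The store-and-forward flow just described can be sketched in a few lines (a toy model with made-up names, not any real chat server):

```python
from collections import defaultdict

class ChatServer:
    """Toy store-and-forward server: every message passes through here."""
    def __init__(self):
        self.mailboxes = defaultdict(list)

    def send(self, sender: str, recipient: str, text: str):
        # The server stores the message until the recipient asks for it.
        self.mailboxes[recipient].append((sender, text))

    def fetch(self, user: str):
        # Called when the recipient loads (or polls) the page;
        # returns and clears everything stored for them.
        messages, self.mailboxes[user] = self.mailboxes[user], []
        return messages

server = ChatServer()
server.send("romeo", "juliet", "But, soft!")
```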

That's fine for occasional, short, bits of text or even the occasional picture. But if we imagine that Romeo instead wanted to start an audio or video chat, that's a whole other thing. First of all, it's a lot more data and it's constantly generating more and more data as the call goes on. And that information is actually doubled, because Romeo has to send it to the server, and then the server has to put it somewhere, and then Juliet has to also pull it back down. So it would be nice for the server operator if they could send the messages directly to each other, rather than involving the server.

The second reason is latency, or how long it takes for a sent bit of data to be received. Even if the server could handle all the data Romeo was sending it, it would usually take longer for the data to go from Romeo up to the server and then from the server down to Juliet, versus going directly from Romeo to Juliet.

So the way WebRTC works is that the user's browser has support for a bunch of standards for forming peer-to-peer connections (person-to-person, user-to-user, browser-to-browser, whatever you want to call them). If Romeo wanted to start a video call with Juliet, he would send her a special message through the server as normal, but this message would contain information on how Juliet could contact him directly. If she wants to talk with him, she would respond with a special message of her own (also through the server) containing information on how she could be contacted directly. Code on each of their webpages would take the special direct-contact information the other party sent them and give it to the browser through the setRemoteDescription method I mentioned earlier to signal to the browser that it would like a direct connection to be established. The page doesn't have to know or care how that happens, it will just be told when it works or doesn't. And life will be good.
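Continuing the toy model above, the exchange just described might be sketched like this (hypothetical names and addresses; in a real page each blob would be handed to the browser via RTCPeerConnection's setRemoteDescription, and the browser would do the actual connecting):

```python
class Peer:
    """Toy peer: trades contact info via the server, then connects directly."""
    def __init__(self, name: str, address: str):
        self.name = name
        self.address = address          # how to reach this peer directly
        self.remote_description = None  # filled in from the other side's blob

    def make_offer(self) -> dict:
        # The special message sent through the server as normal chat traffic.
        return {"from": self.name, "address": self.address}

    def set_remote_description(self, blob: dict):
        # Analogous to the browser's setRemoteDescription(): the page hands
        # over the other party's contact info and asks for a direct link.
        self.remote_description = blob

romeo = Peer("romeo", "198.51.100.7:4444")
juliet = Peer("juliet", "203.0.113.9:5555")

# Offer and answer travel through the server; media then flows peer-to-peer.
juliet.set_remote_description(romeo.make_offer())
romeo.set_remote_description(juliet.make_offer())
connected = (romeo.remote_description is not None
             and juliet.remote_description is not None)
```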

Back to Jingle

Ok, so if life should be good, why was I unhappy that the information I was tracing was going directly into setRemoteDescription? Well, because life wasn't good and things weren't working. And more importantly, it wasn't as easy to figure out what was going on inside WebRTC. I had the code for Movim; I'd already changed it to get extra diagnostic information, and I could easily add more code to tell me more things. But the browser's implementation of WebRTC is a bunch of C++ built directly into the browser. Even though the browser is FOSS, so the code is available, it's still much harder to make a change to it, build a new version of just that part, integrate that into the entire browser, and then build all of that just to test one small change.

So I wanted to avoid that if I could. For now, things were still ok. WebRTC is something the browser knows about, so it should have some kind of tools for WebRTC app developers to help debug things. I was using Firefox so I searched for "Firefox WebRTC debug" and got this. I installed that tool and was instantly disappointed. It told me essentially nothing that I couldn't have found out already. It doesn't give any insight into the inner workings of WebRTC, it just gathers up the events and properties any app could have subscribed to, and subscribes on my behalf, giving me a list of updates. I guess that's better than having to write that code myself, but in this case I just saw a list of updates that seemed reasonable, and still nothing worked. Not very helpful.

screenshot of Firefox's WebRTC debugging UI

The situation looked a little better on Chromium, so I switched over to that. There's a built-in special URI chrome://webrtc-internals/ that provides, on the left of the page, the same log of events that Firefox's add-on did, but on the right let me peek inside all the various data structures in their current state and see what was going on. Finally, I was getting somewhere!

screenshot of Chromium's WebRTC debugging UI

I started by comparing working calls (inbound calls) to broken calls (outbound calls), to get a sense of what was different about them. There were a few false starts here that ended up coming down to randomness instead. It seemed like inbound calls always had some events in one order, and outbound had them in another, so maybe it was a race condition there! But then 1 in 5 inbound calls would actually have them in reverse order and it would still work. It looked like maybe it would work only if srflx candidates were chosen (don't worry about what that means), but then it wouldn't again. I was grasping at straws trying to find a pattern. While I was looking through properties, I did notice one strange thing: one of the pieces of information that is exposed is the number of bits flowing through each candidate pair and each interface. I noticed as I was looking around that the peer connection's inbound data rate was 0, but one of the interfaces had data streaming into it. It just wasn't the interface that had been chosen for the connection... That was weird. It meant Asterisk was trying to send data to the wrong place. To understand what that meant we have to go on another detour!

Detour 2: NATs and ICE

So what are these "candidates", and "addresses", and why are we exchanging them, pairing them up, and choosing them? This is part of the standard called ICE (which has a newer version of the standard also called ICE). This standard also relies on another standard called STUN and optionally an extension to STUN called TURN. That all sounds pretty complicated, but what is it all for? The problem is NATs. First I'll give the basic, oversimplified, version of the problem. Then the basic, oversimplified, version of the solution.

The Problem: NATs

The way practically the entire internet works is that each device is given an address. Then, when one computer wants to talk to another, it wraps the data up into a packet, addresses the packet to the other computer, and pushes it out the wire (or through the air) onto "the network". If the computer on the other end of that wire (AKA the router) is the one the packet was addressed to, then the packet is received! If not, but the router knows where to find that computer, the packet is sent along the right wire and bounces along from computer to computer until it reaches its destination. There's a problem with this, though. In the original design for the internet the addresses were given enough digits to allow for roughly 4 billion different addresses in the best-case scenario. In reality it was far fewer than that. In the 80s that may have seemed like an enormous number of computers, but these days a single human may have a laptop, tablet, desktop, and a phone. They may also have a thermostat, a fridge, a speaker, a television, and various light bulbs and sensors that are also on the internet. Even in the 90s a company may have had a building full of hundreds or thousands of computers, all for people who also had their own computers at home. There was just more demand than expected. So, for this reason (among others), NATs became common.

The idea of a NAT is relatively simple. We have one side, the Local Area Network (LAN), and another side, the Wide Area Network (WAN). Think "inside my network" and "the internet". Inside my network I can give computers whatever addresses I want, so long as each one is unique among the computers inside my network. There are some strong recommendations on what kinds of addresses to give out, but technically I can do whatever. So if I have a LAN, and you have a LAN, we can both have computers on our LAN addressed "", and that's fine because the address is "local" to our own networks. So if one computer on my local network wants to talk to another, it works just like I described before: "" produces a packet for "" and sends it along its cable. The router sees the packet for "" and thinks "Ah, that computer is down this wire" and sends it along, and "" receives it. All is as it was.

But what if "" wants to talk to a computer on the internet? It addresses the packet the same as before, and sends it to the router the same as before. The router will look at the destination address and see that it's on the WAN side, which means the data has to go out onto the net. But the internet is really only useful if packets can be responded to. I want to ask the internet for something, and get an answer back! The router can't just tell the other side that this traffic is from "", because that's a private "local-only" address. Multiple networks could have a computer with that address and there'd be no way to figure out which was which! So what the NAT does is rewrite the packet to put the router's own WAN address as the source of the message before forwarding it on. That way, if the other side does respond, the response will come back to the router, and the router can figure out which request that response was for, rewrite the destination to be the original sender of the packet, and then forward the response into the LAN, rewritten to look like it's for the proper computer. The final piece is that packets aren't addressed just to a computer, but to a combination of computer and "port", so the computer can handle multiple independent connections and know which data is associated with which connection. So if a packet goes out from computer "" and port "537", then the NAT might give it port "537", or "700", or "12432" on the WAN side. It can pick whatever it wants; what's important is that it remembers its choice. Then when a response packet comes back to port 12432 on the public side it can look up in its table to see that this really means "" at port 537 on the LAN side, so it knows how to rewrite it properly.
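The bookkeeping described above can be sketched as a tiny translation table. This is toy code, not real router firmware; the names and the port-numbering scheme are made up purely for illustration:

```c
#include <stdint.h>
#include <string.h>

#define MAX_ENTRIES 64

struct nat_entry {
    char     lan_addr[16]; /* private LAN address, e.g. "" */
    uint16_t lan_port;     /* port the LAN computer sent from */
    uint16_t wan_port;     /* port the router picked on the WAN side */
};

static struct nat_entry table[MAX_ENTRIES];
static int entry_count = 0;
static uint16_t next_wan_port = 12000; /* arbitrary starting point */

/* Outbound packet: reuse an existing mapping or create a new one,
 * returning the WAN port the packet will appear to come from. */
uint16_t nat_outbound(const char *lan_addr, uint16_t lan_port) {
    for (int i = 0; i < entry_count; i++) {
        if (strcmp(table[i].lan_addr, lan_addr) == 0 &&
            table[i].lan_port == lan_port)
            return table[i].wan_port;
    }
    struct nat_entry *e = &table[entry_count++];
    strncpy(e->lan_addr, lan_addr, sizeof e->lan_addr - 1);
    e->lan_port = lan_port;
    e->wan_port = next_wan_port++;
    return e->wan_port;
}

/* Inbound response: look up the WAN port. Returns 0 when no mapping
 * exists, which is exactly the case that breaks peer-to-peer calls:
 * an unsolicited packet has nowhere to go. */
const struct nat_entry *nat_inbound(uint16_t wan_port) {
    for (int i = 0; i < entry_count; i++)
        if (table[i].wan_port == wan_port)
            return &table[i];
    return 0;
}
```

The important property is the asymmetry: outbound traffic always creates a way back, but inbound traffic with no prior outbound packet simply has no entry to match.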

So, that's lovely. Now network operators and consumer network equipment can assign whatever addresses they want on the LAN, and none of it reduces the number of addresses out on the net. People can still talk to the internet, and everything just works the way the computers expect. You can even put one NAT on the LAN side of another, and then put that on the LAN side of another NAT, and have a tree of NATs! Each packet that comes through gets translated by each layer, and then forwarded on to be translated by the next, and when a response packet comes in each layer performs the reverse transformation and forwards it "down" the tree until eventually it gets back to the original computer. So what's the problem? Why bring any of this up? It's because this system only works if everyone inside a NAT only ever wants to reach out to things "on the internet", like servers. But if you remember why you're reading this in the first place, WebRTC is trying to set up direct peer-to-peer connections between two different internet users without having traffic go out to a server at all! NATs can't work that way, because the whole point of their translation is that the computer's address is not something routable on the general internet; it's a local address that only exists within its own LAN. There's nothing you can do to tell a NAT, or the top NAT in a tree of NATs, how to deliver a packet it's received that it didn't already have a translation saved for.

The Solution: STUN, TURN, and ICE

That last sentence really is the key to how we're going to overcome this limitation. If I want people to be able to talk to my browser from anywhere on the internet, what I can do is first send some packet out somewhere so my NAT and any other NATs between me and the internet will all make a translation that will allow responses to that packet to find their way back to me. Then, if somehow the other browser I wanted to communicate with could know the last layer of that translation, the "public" address and port that eventually made it onto the real internet, then they could send traffic there and all the NATs would perform their translation and I would eventually get the packet they sent! Magic! The problem, though, is that I don't know what that final "public" address and port are. Most routers don't expose that kind of information to the LAN, and even if they did there's no guarantee that their WAN address isn't just some other NAT's LAN address. Really we don't care about my router, or how many NATs there are, we just want to know what the internet sees.

So what STUN does is define a format for packets that one can use to find out this information. Then people can run STUN servers, either for public use, or maybe specific to the particular app that's looking to communicate. The STUN server runs on the public internet, and when I send it a packet asking it who I am, it will respond with a packet back to me. But inside that packet it will include the address and port it saw my request come from, which allows me to know my own "public" identity. It would be like calling someone to find my own phone number by asking them what number they see. There's more to STUN than that, but that's all we need for now.
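For a concrete flavor of what these packets look like, here's a sketch of building the fixed 20-byte STUN message header from RFC 5389. The struct and function names are mine; a real client would also randomize the transaction ID and usually attach attributes:

```c
#include <stdint.h>
#include <string.h>

/* STUN message header (RFC 5389): 20 bytes, big-endian fields. */
struct stun_header {
    uint8_t bytes[20];
};

/* Build a bare Binding Request with the given 96-bit transaction ID. */
struct stun_header make_binding_request(const uint8_t txid[12]) {
    struct stun_header h;
    h.bytes[0] = 0x00; h.bytes[1] = 0x01;   /* type 0x0001 = Binding Request */
    h.bytes[2] = 0x00; h.bytes[3] = 0x00;   /* length 0: no attributes here */
    h.bytes[4] = 0x21; h.bytes[5] = 0x12;   /* magic cookie 0x2112A442, */
    h.bytes[6] = 0xA4; h.bytes[7] = 0x42;   /* fixed by the standard */
    memcpy(&h.bytes[8], txid, 12);          /* client-chosen transaction ID */
    return h;
}
```

The server answers with a Binding Success Response (type 0x0101) whose XOR-MAPPED-ADDRESS attribute carries the address and port the request appeared to come from, which is exactly the "what number do you see?" answer described above.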

This will work on many NATs, but not all, because of course it has to be difficult. Some NATs go further than what I've said so far. Rather than just remembering an association between a source address and port and whichever public port they pick, they remember the full combination of source address and port and destination address and port when picking an output port. That means if I send a packet to one host the NAT might pick port "3425" to use, but if I then send from the same port on my end to a different host, rather than re-using 3425 it will pick a totally different port. Because of this, if I send a STUN packet out to a STUN server and it tells me my public address and port, and I give that to my peer, then when my peer uses that address to try and talk to me it won't match my NAT's table, because there are no entries involving my peer's address at all. No packets were ever sent to that address, so no entry was made. STUN calls these "symmetric NATs", and they completely ruin our plan.

To get around this, TURN was created as an extension to STUN. TURN adds another kind of request, which asks the STUN server to allocate a port on the STUN server itself and forward any packets it receives there back to me. Then the STUN server tells me what its own address is and what port it picked for me (very much like an opt-in NAT, actually). So now, even if I'm behind a symmetric NAT, I can still give the other person the STUN server's address and my port there, and I know those packets will make their way back to me down the same connection I used to make this request. This isn't ideal. It's not really peer-to-peer anymore. Packets are still going through another server, which means the TURN server (really the STUN server that supports TURN) will need to be able to handle the volume of traffic, and there's still an extra hop adding latency.

There are a few silver linings, though. The first is that if Romeo needs to use TURN, that doesn't mean Juliet does. So any packets Juliet sends to Romeo may go through another server, but packets from Romeo to Juliet can go direct, which is still half as much traffic to the TURN server. Even if they both need to use a TURN server, there's no need for them to use the same one. Romeo can be talking to one TURN server and Juliet to a completely different one, so each of those servers only sees half the traffic. Also, TURN servers can be much simpler than app servers. They don't have to put anything into a database or anything like that; they just turn each packet they receive into one they send, without even knowing what's in it. This allows them to process more packets per second. And finally, it means the app developer doesn't have to implement both a peer-to-peer mode and a "that didn't work" mode. If they run TURN servers, then at the very least the app will use those and run the same as if it were truly peer-to-peer.

So we have our actual addresses, which might work if we aren't behind a NAT. We have STUN addresses which will work for many people that are behind one or more NATs. And we have TURN addresses which should work for everyone. But TURN is more effort, so we'd rather use STUN if that works. And even STUN is more work than it needs to be if direct messages work, like if both of our two users are behind the same NAT. So what we need is a way to figure out which of these work and then pick the best one. This is what the standard ICE adds to STUN and TURN, and where we get to the "candidates" I mentioned earlier.

With ICE, Romeo would find all the network addresses his device has, maybe use STUN to find his public addresses and ports, and maybe even use TURN to get a new public address and port. The idea is that any of these might work; they are "candidates" Juliet may be able to use to send packets to him. Juliet does the same thing on her end and gets her own list of candidates. They send these lists to each other, and then each side pairs up each of its own candidates with each of the other side's candidates to get a list of candidate pairs. So if Romeo had "A, B, C" and Juliet had "X, Y, Z", then both sides would make the list of candidate pairs "(A, X), (A, Y), (A, Z), (B, X), (B, Y), (B, Z), (C, X), (C, Y), (C, Z)". Now they go through each pair, sending packets from their candidate to the other candidate. If one of them receives a packet, then that means this direction works, at least. They respond with a success message, and then immediately send their own check for that pair back, if they hadn't already, to see if it works in the other direction too. At the end of this process we will have tested all candidate pairs and we'll have a list of the ones that worked in both directions.
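The pairing step above is just a cross product. A toy sketch, with each candidate reduced to a single letter purely for illustration:

```c
/* Illustrative only: a "candidate" here is just a one-letter name. */
struct cand { char name; };
struct pair { char local, remote; };

/* Cross every local candidate with every remote candidate, the way
 * ICE builds its check list. Returns the number of pairs written. */
int form_pairs(const struct cand *local, int nl,
               const struct cand *remote, int nr, struct pair *out) {
    int n = 0;
    for (int i = 0; i < nl; i++)
        for (int j = 0; j < nr; j++) {
            out[n].local  = local[i].name;
            out[n].remote = remote[j].name;
            n++;
        }
    return n;
}
```

Feeding in Romeo's "A, B, C" and Juliet's "X, Y, Z" yields the same nine pairs listed in the text, starting at (A, X) and ending at (C, Z).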

Along with the candidates, ICE also has us keep track of what kind of candidate each one is: "host" for network addresses the device has directly, "server reflexive" ("srflx") for the ones we got from STUN, "relayed" ("relay") for candidates from TURN, and "peer reflexive" ("prflx"), which only comes up when one side successfully receives a packet from an address that isn't otherwise a candidate, meaning there's some other network quirk between the two users. Each of these candidate types is given a priority based on our preferences; for example, we'd prefer to use host over srflx, and srflx over relay. Then we can combine the priorities of each side of a pair to get a priority for each working pair. At this point we can simply pick the best working pair, given our priorities.
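The arithmetic behind those priorities is spelled out in the ICE standard. Here's a sketch using the formulas and recommended type preferences from RFC 8445 (host 126, prflx 110, srflx 100, relay 0); the helper names are mine:

```c
#include <stdint.h>

/* Candidate priority (RFC 8445 §5.1.2.1): type preference dominates,
 * then a local preference, then the component ID as a tie-breaker. */
uint32_t cand_priority(uint32_t type_pref, uint32_t local_pref,
                       uint32_t component_id) {
    return (type_pref << 24) + (local_pref << 8) + (256 - component_id);
}

/* Pair priority (RFC 8445 §6.1.2.3): G is the controlling agent's
 * candidate priority, D the controlled agent's. Both sides compute
 * the same value, so they agree on the ordering of pairs. */
uint64_t pair_priority(uint32_t g, uint32_t d) {
    uint64_t mn = g < d ? g : d;
    uint64_t mx = g < d ? d : g;
    return (mn << 32) + 2 * mx + (g > d ? 1 : 0);
}
```

Because the type preference sits in the top bits, a host candidate always outranks an srflx one, which always outranks a relay one, matching the preference order described above.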

There is one last wrinkle. Networks are complicated, and sometimes things go missing. As a last protection against this, one side of ICE (conventionally the person making the call) is nominated as the "controlling" peer. The controlling peer gets final say in which pair is actually nominated to be used. So once we have a pair we like, the controlling peer sends a request again with a "use-candidate" value to tell the other side that this is it, the other side responds back "ok" so everyone knows we're all clear and that communication on this pair still works. At this point we're finally done with ICE and we have a pair of ports that can be used to talk from peer to peer.

Back to WebRTC

Ok, that was pretty in-depth. Let me remind you where we are here: I'm looking in my WebRTC debugging tools and I'm seeing that the connection object isn't getting any data, and the candidate pair chosen to represent that connection isn't getting any either. But there is a candidate pair that is getting data, it's just not the right one!

So my first thought on how this could happen was that maybe there was a disagreement between Movim/Chrome and Asterisk on which of them was the controlling peer! That would also explain why inbound calls work, because the caller is always the controlling peer. So if Asterisk thought it was the controlling peer in both cases, then Chrome would agree on inbound calls, but disagree on outbound calls. It felt pretty right. There is a section in the ICE standard on how to resolve a situation like this, but maybe it wasn't being followed properly. Here's the problem... this is a pretty internal detail of the implementation of ICE in these two pieces of software. I didn't want to take our production Asterisk server down and add a bunch of logging here, maybe even breaking it in the process. And like I mentioned before, I wasn't excited to rebuild Chrome just to test this. I spent a little bit of time looking to see if there was some implementation of ICE simpler than Chrome that I could use as a stand-in and be freer to make quick changes to, but nothing jumped out that wasn't going to be hard to adapt to my actual use-case. I lamented: I didn't even need logs, what I really wanted was some way to see the data they were sending back and forth without modifying anything. Oh... wait a second...


At this point it became clear that I had been working in web and other special areas for too long. I had been searching for so long for a way to inject logging statements into this flow somehow so I could see what was on the network, when I should have immediately reached for Wireshark. It had previously been a tool in my toolbox, but I hadn't touched it since everything became all-web-all-the-time. Wireshark is a program that just records all of the packets your computer sends and receives, and shows them in an interface that makes it easy to filter, search, and inspect. I didn't need the programs to log what they thought they were doing, I could inspect what they actually did and follow along that way! What's even better is that Wireshark already knows about STUN and TURN, so it can show me the different fields without me having to know how to unpack the bits from the packet myself!

screenshot of Wireshark UI

See here how I can search for "stun" and it'll only show me the packets for STUN? Also notice that I can expand the "STUN" attributes, because it knows about them, and see in plain terms "this is a binding response" and my IP, and also the tiny diamond on the left shows the corresponding request that this response is to. Very handy stuff. Much better than logging.

If you remember what I mentioned before, ICE has bits in the standard that try to correct for the situation where both sides think they're the controller. In order to do that, they declare on each request whether they're making it as the "controlling" or "controlled" party, which means it was easy to figure out if each side thought they were the controlling peer. Sure that this was it, I looked; everything looked like it was to spec. Fuck.

Ok, if Asterisk knew it wasn't the controlling peer, why wasn't it using the candidate pair the controlling peer was nominating? Now that I had real data that was being exchanged, I could start going from packets, to standard, to code, and back, to try and trace out how everything was actually working. After tracing around for a while on how I expected the flow to progress I found a real problem!

    char address[PJ_INET6_ADDRSTRLEN];

-   if (component < 1 || !ice->comp[component - 1].valid_check) {
+   if (component < 1 || !ice->comp[component - 1].nominated_check) {

-       pj_sockaddr_print(&ice->comp[component - 1].valid_check->rcand->addr, address,
+       pj_sockaddr_print(&ice->comp[component - 1].nominated_check->rcand->addr, address,
            sizeof(address), 0),
-       pj_sockaddr_get_port(&ice->comp[component - 1].valid_check->rcand->addr));
+       pj_sockaddr_get_port(&ice->comp[component - 1].nominated_check->rcand->addr));

 /*! \brief Destructor for locally created ICE candidates */

This code was toward the end of the ICE session, where we take the result of ICE and prepare to return it to the main code. Every time we get a response to one of our checks it's marked as "valid", and that valid_check field is updated whenever a new check is found to be valid with a better priority than what's stored there. That way, by the end, we have the best-priority valid check easily in reach. Also, any time the controller nominates a candidate we do a similar thing and store the result in the nominated_check field. But here, at the end, we're not using the best nominated check, only the best valid check. That will work so long as the best valid check is also nominated, which is expected by the standard, but not technically required, and also not what I was seeing. This is great! Finally something that explains what I was seeing: Asterisk sending to a candidate that wasn't nominated.

So I deployed that and confidently ran it. Still didn't work. That was disappointing.

Ok, back to Wireshark looking for other weird things. Paying closer attention to the actual flow, I noticed that a request would go out for one pair and I'd get the response properly. But that's it. If we remember the standard, there's supposed to be an immediate request in the opposite direction to test that direction, but now that I was looking, I only saw responses to my requests, and never the requests originating from Asterisk. This is important because the ICE negotiation isn't fulfilled until the other side makes these requests. From our perspective only one direction works, and it looks like we can't actually talk to Asterisk using the same channel Asterisk can talk to us on. So we keep trying to nominate a pair, but never receive the expected opposite request to confirm to us that this pair is good, so we try again, and again, etc.

This gave me an area in the code to look at, at least. What's worse, this code doesn't actually live in Asterisk's codebase, but in pjproject, an external library that Asterisk depends on for its ICE and STUN support. After some investigation and comparing to the standard I noticed this section:

/* Triggered Checks
 * Now that we have local and remote candidate, check if we already
 * have this pair in our checklist.
 */
for (i=0; i<ice->clist.count; ++i) {
    pj_ice_sess_check *c = &ice->clist.checks[i];
    if (c->lcand == lcand && c->rcand == rcand)
        break;
}

/* If the pair is already on the check list:
 * - If the state of that pair is Waiting or Frozen, its state is
 *   changed to In-Progress and a check for that pair is performed
 *   immediately.  This is called a triggered check.
 * - If the state of that pair is In-Progress, the agent SHOULD
 *   generate an immediate retransmit of the Binding Request for the
 *   check in progress.  This is to facilitate rapid completion of
 *   ICE when both agents are behind NAT.
 * - If the state of that pair is Failed or Succeeded, no triggered
 *   check is sent.
 */


So this comment specifically references the ICE standard's section on triggered checks when deciding whether to send these checks back with the same candidates after receiving a check from the other side. The problem is that it's actually wrong; the standard says:

If the state of the pair is Failed, it is changed to Waiting and the agent MUST create a new connectivity check for that pair (representing a new STUN Binding request transaction), by enqueueing the pair in the triggered check queue.

So in their implementation, if I've already tried and failed to contact you with a candidate pair, and then later I get a request from you with that pair, I should ignore it. The standard, though, says I should instead try again on that pair, since it may have just started working. That's kinda weird though. If there was an occasional failure and it mysteriously didn't work one in every hundred calls, maybe this would be to blame, but this failed consistently. Every time. What's up? Well, a good place to start is how things end up in the Failed state. The code had a hard-coded timeout: it retries each request 7 times, with one second between attempts, before deciding the pair doesn't work. That makes some sense, but looking at my Wireshark session I noticed something important.

The way Jingle works, when Romeo clicks the "call" button it sends a request to Juliet's device saying "I'd like a call please", and then it starts gathering candidates and sending them over. Juliet's device shows an incoming call screen or something to ask if she'd like to pick up, and when she answers yes she sends her candidates back to Romeo so they can start negotiating. But in this case we're not calling Juliet's device, we're calling Asterisk. The way Asterisk handles this is that it starts ringing on the phone network, and to speed things up in the meantime it starts gathering candidates and builds an ICE session with Romeo's candidates. This means the ICE session has already started before the other person's phone has even started ringing! Then, if the other person accepts the call, Asterisk will send its candidates down to Romeo with the session acceptance, so Romeo can start his ICE session with Asterisk's candidates.

That means that while the phone has been ringing, Asterisk has already been trying Romeo and finding no one's responding (because Romeo hasn't started his ICE session since he hasn't seen any candidates yet). So after 7 seconds Asterisk decides the candidates don't work. Later, when the call has been answered, Romeo starts ICE and starts sending out ICE messages and gets responses, but doesn't get any triggered checks, so he assumes there's something wrong with his responses and that the channel isn't actually working. So he keeps trying to get through and nominate things, but it never works. Ok!

So, assuming that's our problem, there are a few ways to fix it. The first thing we could do is change the ICE triggered checks to be in line with the standard, so failed checks would be retried when things actually start on the other side. Another way would be to change how ICE works for Jingle in Asterisk and only start the ICE session once we've sent the call acceptance back to the caller. That way both sides would start their ICE around the same time, so they'd likely line up and actually agree on something. The problem was that the first solution involved changing code in pjproject, which was annoying, and the second was a somewhat involved change to how Asterisk's Jingle integration worked. Instead I opted for a worse, but far easier, solution: simply increase the timeout to 45 seconds. This dodges around the problem by assuming that Asterisk won't have considered the candidate failed by the time the person actually answers the call. That way it will still send the triggered check, and all will be good with the world.
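The arithmetic of the race is simple enough to write down. A hypothetical back-of-the-envelope check, using the numbers from the text; the function name is mine, not pjproject's API:

```c
/* A pair is marked Failed once every retransmit has gone unanswered,
 * i.e. after roughly max_retries * retry_interval_s seconds. If the
 * callee answers later than that, the triggered check is never sent. */
int pair_failed_before_answer(int answer_delay_s, int max_retries,
                              int retry_interval_s) {
    int give_up_after_s = max_retries * retry_interval_s;
    return answer_delay_s > give_up_after_s;
}
```

With the stock 7 retries at one-second intervals, a callee who lets the phone ring for 15 seconds is well past the ~7-second give-up point and hits the bug, while a 45-second window comfortably covers typical ring times.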

So I rolled that out, and it actually worked! We plan to return to this issue and build a real solution, but for now it allowed us to keep testing.

Mission Accomplished

Mission Accomplished banner on US Navy Ship

So, I now had Movim successfully and consistently making calls out to Asterisk, and thus to real humans' phones. I told my coworkers I had done it, I had found the problem and fixed it, and all was now well. So we tested it with Conversations, the Android client that we expect many of our users to use. Nope. Just as broken as ever. Ok, maybe I was a little hasty... What about Gajim, another desktop client? Busted. Ok, what about Movim on Firefox, where I started before switching to Chromium just for its dev tools? Totally broken.

Ok, so... maybe there was still a ways to go... Don't worry, though, dear reader. The next fixes won't take as long to explain.


I started with Gajim. It's not more important than the others, but when I was testing I noticed Gajim actually printed a useful-looking error message in its error log. That's a very nice place to start! It also wasn't mad about the ICE stuff at all, but about something earlier. At the beginning of setup, the clients negotiate what kind of data is going to go over this call. Is it video, or audio, or file transfer? If it's audio, what format of audio is it, is it stereo, what levels of quality does each side support? These are the kinds of things where the two clients are trying to come to a consensus on how they're actually going to go about transmitting audio data, once ICE figures out the connection itself. In Jingle the way this works is that during session initiation you can specify the kind of content you want in this session, or you can add new content to an existing session later. From then on we can talk about that content by its name and the person who created it: "initiator" for the person making the call, and "responder" for the person who has been called. That's mostly there to prevent a case where Romeo starts a session with Juliet and then they both propose a new audio stream at the same time, and each side thinks they're negotiating about their own proposal, when in reality there are two proposals. With the creator attached it becomes clear that they are each talking about their own audio content, and further negotiation is required.

That being said, the code in Asterisk seemed to feel it was always the creator of the audio content. The code was written to send a creator of "initiator" for an inbound call, and "responder" for an outbound call. For inbound calls, Gajim agreed; Asterisk would propose a new session with audio content, and so it was the creator. For an outbound call, though, Gajim would do the same and propose a new session along with audio content, but Asterisk would respond back about an audio stream that Asterisk itself had supposedly created. But it hadn't created one; Gajim had. So I made Asterisk just always use "initiator", which seems to match how clients actually establish sessions. Incidentally, the reason I didn't notice this before is that Movim doesn't care about the creator field and also just assumes "initiator".

So after that change it now worked in Gajim!

Firefox and Logging

Movim on Firefox was harder to debug. When I looked at Wireshark, I just saw... nothing. There were occasional lone packets going out, but basically it looked like ICE wasn't doing anything. That made it hard to debug...

By now, though, I'd found where ICE actually lives in the code, specifically over in pjproject and not the main Asterisk code. I'd gained some experience reading and working in that code, and while doing so I noticed some parts of it already had logging statements. If I could just figure out how to turn them on, they might tell me more about what the code was thinking. I'm embarrassed to say it took me a good while to figure out how to get those logs turned on.

I eventually found some forum post somewhere outlining the simple steps I needed to use to enable logging of the data I wanted:

# First we get to the asterisk command shell
$ sudo asterisk -r
# Then we add a new "logging channel" that will include debug logs
> logger add channel some_filename notice,warning,error,debug
# Then we set core (that is, Asterisk) to log up to debug logs
> core set debug 5
# Then we set pjproject to also log debug logs
> pjproject set log level 5
# And then tell pjsip (not sure why it's not pjproject) to actually log
> pjsip set logger on

Not sure why I didn't just guess all of that...

But anyway, now I could run my tests and it would log out to /var/log/asterisk/some_filename! I will admit, it is nice that I wasn't filling the normal log files with junk and could actually see only my test, rather than wandering through days of logs looking for my portion.

When I was done, I could do the reverse (also from the asterisk command shell):

> pjsip set logger off
> pjproject set log level default
> core set debug off
> logger remove channel some_filename

This would stop putting new logs in my some_filename file, but wouldn't delete it. This is also convenient because I could now search through this file without it getting infinitely longer, or filling up with logs I wasn't interested in.

That being said, even the short file for a test that takes a minute can have thousands of log lines, so it still takes some sifting to find the actual information I'm looking for.

I noticed a few important things looking through the logs. The first, and most obvious, thing is that it builds an ICE session many many times. It'll build one, tear it down, build another one, then tear that down, within a second. This made it hard to follow the history of a single session, but was also very obviously something that might be a problem. The second issue I noticed is that all of the sessions got to a point where they said "Resetting ICE for RTP instance", but some of them said "Nevermind. ICE isn't ready for a reset" afterwards, and then things didn't seem to work after that. All the broken ones had "comp_id=2", which meant they were for the second component. Comparing the same logs with Movim on Chrome, there was no second component. Huh.

So what is a component? ICE has a section in the spec for negotiating multiple independent ports in a way where either they both work or the whole negotiation fails, which could be used by applications which need multiple ports to work in coordination for anything to work. The protocol that WebRTC offers for audio is called RTP (Realtime Transport Protocol), which has two modes of operation. Originally RTP had two connections, one where it would send the audio data, and another called RTCP (RTP Control Protocol) where it would send information about how well the audio was sending so the participants could adjust their quality or something. A later version of RTP added an optional feature called rtcp-mux, which allowed the sending of the RTCP information along the same connection as the audio so we only need one connection, and so only one ICE component. Well, when WebRTC was standardized it was decided that WebRTC required the RTP implementation to support rtcp-mux in order for it to be allowed as part of WebRTC. So in Chromium they take advantage of that and just assume it supports rtcp-mux and only start ICE for one component. Firefox, though, felt it was important to be more backwards-compatible and tries to support both rtcp-mux and traditional RTP+RTCP modes. There's a way to tell if the other server supports rtcp-mux, but that information is sent when the other side answers the call, and by then Firefox has already sent all of the candidates for both components.
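The component-count difference boils down to something like the following sketch. This is purely illustrative; neither browser structures its code this way:

```python
def ice_components_to_gather(assume_rtcp_mux):
    # Component 1 carries RTP. Without rtcp-mux, RTCP needs its own
    # connection, so a second ICE component is negotiated for it.
    return 1 if assume_rtcp_mux else 2

chromium_components = ice_components_to_gather(True)   # WebRTC mandates rtcp-mux
firefox_components = ice_components_to_gather(False)   # stays compatible with plain RTP+RTCP
```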

Ok, so that's why Firefox acts differently from Chrome, but why is it a problem? Surely it should be fine to negotiate two components and just ignore one, and ICE should work either way. Well, that comes down to the constant building and rebuilding. The way original ICE works, the full set of candidates is gathered by both sides and then exchanged, they're all processed, and then a winner is picked. Jingle, though, made a change where each candidate is sent as it's discovered. That way the ICE session can start sooner and can be looking for candidate pairs while the STUN and TURN stuff is still going on; and if a candidate is found right away, it might not even be worth getting a TURN candidate, since it'll have lower priority than the valid pair we've already found. That method of operating eventually got its own draft standard under the name Trickle ICE, which is similar but slightly different, and there's a new version of Jingle meant to bring Jingle up to date on newer versions of ICE, including Trickle ICE. It's a draft, though. All that is to say, things are kind of a mess, and the version of ICE that Asterisk supports right now is not the Trickle kind. That's a feature being worked on for the future, but it's not released or supported by the XMPP integration at the time of this writing.

So, if Jingle uses Trickle ICE, but Asterisk doesn't support it, how does the XMPP integration in Asterisk work? Well, every time it sees a new candidate trickle in, it just restarts the ICE session as though that candidate is the only one. It's not ideal, but it appears to work a surprising amount of the time! That's likely owing to ICE's flexibility: when it gets a request from a pair it doesn't know about, it just adds it to the list. As long as one side knows all the candidates, the other side should respond properly and come to a consensus. Weird, but fine. Maybe in a newer version of Asterisk that will get better. But in this case Firefox has two different components, and it sends the "component 2" candidates after the "component 1" candidates. So Asterisk will get one candidate and set up an ICE session, then immediately get the component 2 candidate, so it tears the first one down and sets up a new one. But this new one only has candidates for component 2. That's something ICE doesn't allow, so Asterisk's ICE session isn't in a "nominating" mood, so it just doesn't do anything. That's why Firefox doesn't appear to be able to negotiate a connection.

So now that we know the problem, how do we fix it? Well, we could implement a sketchy version of Trickle inside Asterisk, or even just inside the XMPP integration code for Asterisk. That looked likely to introduce new bugs, and I knew someone was already working on a real version of Trickle that may just work later. And even though the current approach is nearly broken, it also worked in Chromium every time I tested it. I really just wanted Firefox to work the same way Chromium does, so that's what I built. I just put some code in the XMPP integration part of Asterisk that ignores candidates for component 2. I know they're not going to end up being used either way, because all of our clients support rtcp-mux, and this is basically one line of code that is highly unlikely to introduce any new bugs!
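The fix amounts to a filter like this Python sketch (the real change lives in Asterisk's C code; the candidate structure here is invented for illustration):

```python
RTCP_COMPONENT = 2  # ICE component 2 carries RTCP when rtcp-mux isn't used

def accept_candidate(candidate):
    # Drop component-2 candidates before they reach the non-trickle ICE
    # session; all our clients support rtcp-mux, so they'd go unused anyway
    return candidate["component"] != RTCP_COMPONENT

trickled = [
    {"component": 1, "ip": "192.0.2.10", "port": 40000},
    {"component": 2, "ip": "192.0.2.10", "port": 40001},
]
usable = [c for c in trickled if accept_candidate(c)]
```

With only component-1 candidates left, Asterisk's restart-per-candidate trick works the same way it does for Chromium.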

Now Movim on Firefox works too! Three down, and one to go!


Conversations

This was it, our most supported client, and if it didn't work here we couldn't really call it a feature. And it still wasn't working after all those other fixes. It's hard to use Wireshark because the app runs on my phone, but I now had the power of Asterisk logs at my disposal! Using those I took a look and... everything looked pretty good. Candidates were being exchanged, and negotiated, and chosen as valid, and then... just sitting there. None of the valid options were ever being nominated, so both sides were just waiting until someone got bored of waiting and ended the call. I decided that this was really interesting, but I might need Wireshark to see more. Great.

There's probably some way I could have networked my phone through my computer or something to use Wireshark, but if I found that I needed more logging or something, it would probably be useful to be able to build the code for the Android app. And if I can build the code, then I can just run the Android Emulator, which lets me run the app on my computer in a fake phone environment. So I pulled down the code for Conversations and got it set up and working in Android Studio. Now it was as easy as clicking the "play" button and running Wireshark to watch the packets the virtual phone was sending. Since I already knew what to look for from the Asterisk logs, it was pretty easy to look at the stream of packets and see that neither side was including the "use-candidate" attribute which, if you remember, is how the controlling side tells the controlled side which candidates we're going to go with. That would explain why inbound calls work with Conversations: for inbound calls Asterisk is the controlling side, and includes that "use-candidate" value so we're all on the same page, but when the roles are reversed and Conversations is the controlling side, it never nominated anything. That's weird. Unlike the Firefox case, it's not like it was doing nothing; it was definitely making requests to find a list of valid candidates. And unlike the original Chromium problem, triggered checks were coming back just fine, and both sides appeared to know what the valid list was. After combing through packets, I did manage to find one weird thing. Wireshark knows about the attributes in STUN and can pull those values out of a packet and show them to me, but some of the packets around the time I might expect nomination to start had some extra attributes: values Wireshark didn't recognize and listed only by their type, "0xC001". It's possible it was nothing, but it was something the working implementations didn't do, so it was a thread I had to pull on.

I pulled down the code for libwebrtc, the WebRTC implementation Conversations uses, and searched through it for that value; it was associated with STUN_ATTR_NOMINATION. Continuing to trace through the code for where this value was used, and which sets of conditions led to that code, and so on, I eventually found that there is an option libwebrtc supports called "renomination". I managed to find what could be called a standard only by the most generous definitions, ICE Renomination: Dynamically selecting ICE candidate pairs. This document doesn't actually say which STUN attribute to use to do the nomination, saying only "we define a new STUN attribute", but the code I was looking at seemed to line up with the intent of the document, at least.

The intention of this standard is to make it easier to control which pairs get nominated by ICE. In base ICE we test to find valid pairs, then some are nominated, and then the nominated pair with the best priority is chosen. Renomination is trying to make it easier for the controlling side to change its mind, for example if a WiFi device goes out of range during negotiation, making the cell network candidates the better choice. I'll be honest, I'm not really sure how often this would really apply, because ICE doesn't run for very long. Either way, the way this proposed extension handles this "mind changing" is by replacing the "use-candidate" attribute, which is either present or absent, with a different "nomination" attribute that contains a number. This number is more important than the priority, so the controlling side can nominate one pair with value 2, and then later nominate another pair with value 3, and that pair will now be the best choice no matter what the priorities say. So that's what the "unknown" attribute I saw in Wireshark was, and also why no candidates were ever chosen: Conversations was sending one attribute to nominate, and Asterisk was waiting for a different one.
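As I understand the draft, pair selection changes roughly as this sketch shows; the dictionaries and field names are mine, not anything from the spec or libwebrtc:

```python
def select_pair_base_ice(nominated_pairs):
    # Classic ICE: among nominated pairs, the highest priority wins
    return max(nominated_pairs, key=lambda p: p["priority"])

def select_pair_renomination(nominated_pairs):
    # Renomination: the highest nomination counter wins, regardless of priority
    return max(nominated_pairs, key=lambda p: p["nomination"])

pairs = [
    {"name": "wifi", "priority": 100, "nomination": 2},
    {"name": "cell", "priority": 50, "nomination": 3},
]
```

Here base ICE would pick the WiFi pair, while renomination picks the cell pair because it was nominated last.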

But why? There's a part of the spec that says renomination is only turned on if both sides include the renomination option in their initial candidate exchange. Well, Asterisk definitely doesn't support it, so what's up? Looking at the Conversations code, it seems the issue is that there's no room in the Jingle standard for exchanging whether or not someone supports an option like this. There just isn't a value for that. So in order to allow it, Conversations just assumed all call partners supported it and included that value on both sides of every session it set up. It also seems that libwebrtc isn't very picky about this. This means that if a user running Movim on Chromium is called by Conversations, Conversations will think both sides support renomination and Movim will think neither side does. But despite this, when Conversations sends the special attribute instead of the "use-candidate" attribute, Chromium still understands it, and doesn't even check whether it should expect that kind of value for this session. It just knows what was meant. So this is why Conversations works with Chromium, but not with any standard implementation of WebRTC that doesn't support the draft renomination extension. Like Asterisk.

To fix this I did two things. First, I talked to the Conversations developer about how we could be more careful about when we include this option, and was told we could just never include it; they didn't have strong opinions on it. So I made a patch to remove the option. That was the easiest fix I've had to make yet. The problem is that, unlike the changes I'd been making to our Asterisk server, this code runs on our users' phones. We couldn't control when it would be released, and we couldn't control when our users would install the new version even after release. Since then, that change has gone out in version 2.9.8, but we didn't know that at the time, and wanted this to work as soon as possible. So, like the Firefox fix, I considered building an implementation of renomination in Asterisk, but after looking at my Wireshark capture it didn't seem necessary.

Since it was just a stopgap measure until our users migrated to newer versions of Conversations, I did the simplest thing that would work: just treat the "renomination" attribute (0xC001) exactly the same as the "use-candidate" attribute, ignoring the number, and the purpose of the extension, entirely. This would break if Conversations ever actually tried to renominate a candidate to a new pair with lower priority, but how likely is that? Probably not very. And if it only ever nominated once, or tried to renominate to a higher-priority candidate, then it would just work! The only quirk here is the one I mentioned earlier: ICE is actually implemented in a library Asterisk uses called pjproject. I really didn't want to fork that library and cut a new release just for myself with my garbage change in it, solely so I could use it in our Asterisk deployment. Luckily, this apparently isn't the first time Asterisk has needed to make some tweaks to pjproject, so there's already a system for that! If I make a git commit in the pjproject repository, and then put the diff from that commit into a folder in a magic location in the Asterisk repo, the Asterisk build will apply that change to the library for me before building! That meant I could still use an official release of pjproject, and could keep all of my ugly changes together in the Asterisk codebase.
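In Python-flavoured pseudocode, the stopgap looks something like this (the actual patch is against pjproject's C; USE-CANDIDATE's type 0x0025 comes from the ICE RFC, while the helper function is my own illustration):

```python
STUN_ATTR_USE_CANDIDATE = 0x0025  # standard ICE nomination attribute
STUN_ATTR_NOMINATION = 0xC001     # libwebrtc's renomination attribute

def check_nominates_pair(attribute_types):
    # Treat a renomination attribute exactly like USE-CANDIDATE,
    # ignoring its counter value entirely
    return (STUN_ATTR_USE_CANDIDATE in attribute_types
            or STUN_ATTR_NOMINATION in attribute_types)
```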

I tested both changes, and they both worked. If the client didn't specify "renominate" as an option, then it spoke standard ICE and would work without my change. And if I left the client as-is it would work with my altered server that pretended to know what renomination was. The phone calls went through, and Conversations worked! All the clients worked!

Somehow Wrapping Up

Well, if you're at the end of this, thanks for following me on my odyssey. We began at the banks of "this is 90% working, there's probably just one weird typo to fix to get it the rest of the way". From there it was down the river of standards and the rapids of interoperability between browsers and old C projects, aided all along by my compass, Wireshark. I was confident when the rapids calmed down and the one problem was fixed, but there was a world of trouble ahead, as I didn't see the waterfall coming. My raft came out the other side, a little battered but holding together. Calls were being made, and metaphors were being thoroughly stretched. Seriously, why would I need a compass to raft down a river? IT ONLY GOES ONE WAY.

All in all, none of these fixes were "the proper fix", but with them I got the system working. I plan to engage the Asterisk community to open discussions about the actual issues, rather than just producing a patch no one wanted, but in the meantime calls are being made and received. And I got to stretch my legs and learn a bunch of inner workings, which is my interest. I even got a blog post out of it...


  • Mission Accomplished banner by U.S. Navy photo by Photographer's Mate 3rd Class Juan E. Diaz. (RELEASED) - Source, Public Domain

by 0 at August 17, 2021 14:51

August 16, 2021

Monal IM

Monal 5.0.1: Synchronized builds and bugfixes

We have released Monal version 5.0.1, which contains mostly corrections and small improvements. The iOS and macOS builds are now synchronized and available in the Apple App Store.

Here are the changes in this release:

  • Show warning if camera permissions are missing while trying to use camera
  • Fixed duplication of contacts in chat overview
  • Fixed some crashes
  • Show Debug menu after tapping 16 times onto app version
  • Don’t drop file download errors silently
  • Don’t log outgoing SASL and password change stanzas (your password won’t be logged anymore)
  • Trim whitespaces and newlines at the beginning or end of a message
  • Fix microphone icon not always showing
  • Renamed “Log” to “Debug” in settings menu
  • Move contact details close button to the left
  • Fix some very rare TCP stream handling bugs
  • Fix old XMPP resources created with Monal older than version 4.3 not having a random part
  • Fix bug in upload queue not reacting to enter key
  • Privacy: Only register to APNS and push appserver if notifications are allowed
  • Fix bug in Message Archive Messaging (MAM) handling with ejabberd

Monal is developed entirely in the spare time of its developers and has no commercial interest – for the freedom of software, but also to enable ad- and tracking-free communication. Therefore, we would kindly ask you to consider making a donation.

If you have hardware to donate, please reach out to us first! Last but not least, you can check out the new features and give us feedback; that helps a lot to improve the app. Read about how to support!

Finally, do you know someone who might volunteer to support the visual design and improvement of the app’s interface? Then please reach out to us as well.

Spread the word! We have this blog but also a Mastodon account, Twitter account and you can read this via the Planet Jabber RSS feed.

Development is conducted via GitHub.

Let’s change digital communication via XMPP in the Apple ecosystem, together!

Your Monal IM developers!

by emus at August 16, 2021 20:04

August 15, 2021

Peter Saint-Andre

Meditations on Bach #7: Aristotle and Bach

On pp. 169-174 of his book Bach: The Learned Musician, Christoph Wolff describes the genesis of Bach's musical thinking. Of particular interest to me is his recounting of some insights from Johann Nikolaus Forkel, who founded the field of musicology and wrote the first biography of Bach in 1802. Wolff writes as follows....

August 15, 2021 00:00

Aristotle Research Report #16: The Sources of Beauty

Aristotle uses the word καλός in both an aesthetic sense and an ethical sense. This has caused confusion among translators and commenters alike. Should the word be translated as "beautiful" when talking about art but as "right" or "fine" or "noble" when talking about character, intention, and action? Did Aristotle think that works of art were inextricably tied up with morality or that traits of character were aesthetic in some way? Let's look into the matter....

August 15, 2021 00:00

August 14, 2021

Peter Saint-Andre

Meditations on Bach #6: Five Strings?

Although the first five of Bach's suites for unaccompanied cello lie quite naturally on the bass (when tuned in fifths, that is!), the sixth suite in D major (BWV 1012) is a slightly different story because it was originally written for an instrument with an added string above the usual four. The exact identity of this instrument remains a mystery - some think it was written for a viola pomposa or viola de spalla, others for a violoncello piccolo (not that any of those instruments are well-understood). Whatever the truth of the matter, playing music written for a five-string instrument on a four-string instrument introduces new challenges: in particular, it requires intricate playing high up on the fingerboard. Modern cellists try to overcome this challenge through heavy use of thumb position, an innovation that post-dates Bach's lifetime; however, that doesn't make the task much easier. While working on the prelude to the sixth suite, I've realized that playing it on a five-string electric bass would make a lot of sense. Ideally such a bass would be tuned in fifths with a high E string, C-G-D-A-E. This seems achievable by using strings for a six-string bass and discarding one of the strings; for example, La Bella makes a six-string set normally tuned B-E-A-D-G-C and I would tune B up to C, discard the E string, tune A down a step to G, keep D as-is, tune G up a step to A, and tune C up two steps to E. After conferring with Marek Dąbek of Stradi Basses on whether the high E string will work, I'm happy to report that we're transforming the "Mocha 4" into a "Mocha 5"....

August 14, 2021 00:00

August 10, 2021

Ignite Realtime Blog

JSXC Openfire plugin 4.3.1-1 released!

The Ignite Realtime community is happy to announce the immediate availability of version 4.3.1 release 1 of the JSXC plugin for Openfire, our open source real time collaboration server solution! This plugin can be used to conveniently make available the web-based JSXC client (a third-party developed project) to users of Openfire.

The upgrade from 4.3.0 to 4.3.1 brings a small number of changes from the JSXC project which appear to be mostly bug fixes and small improvements. Please review the changelog for more information.

Over the next few hours, your installation of Openfire should automatically detect the availability of this new release. Alternatively, you can download the new version right now from the plugin’s archive page.

If you’re interested in engaging with the community that builds Openfire and its plugins, please come join us in our forum or chat room!
For other release announcements and news follow us on Twitter

7 posts - 2 participants

Read full topic

by guus at August 10, 2021 08:20

August 07, 2021

Peter Saint-Andre


This piece of light verse popped into my head the other day....

August 07, 2021 00:00

August 06, 2021

Erlang Solutions

Why Build A Bank In Elixir – Memo Bank’s Story

Elixir is a programming language that runs on the BEAM VM – the same virtual machine as Erlang – and can be adopted throughout the tech stack. Elixir is designed to combine Ruby’s familiar syntax with the proven performance, scalability, and resilience of Erlang. When choosing a programming language for software development in FinTech and financial services, uptime and fault tolerance are of mission-critical importance, and this is where Elixir will often be the right tool for the job.

Here we take a look at the success story of Elixir in FinTech with Memo Bank, the first independent bank to be created in France in the last fifty years, that has just completed a new fundraising round of €13 million. 


‘The kind of bank we wanted simply didn’t exist; so, we decided to build it.’

This was the ambitious starting point for the genesis of Memo Bank. We will examine why and how they chose to build a fast, innovative, and secure banking system from scratch, using Elixir as the programming language of choice.

Memo was founded in 2017 and serves the European small and medium business (SMB) market, helping companies manage cash flow and fund their growth as a bank ‘designed by business people for business people’. The French bank provides all the services you’d expect from a business bank, from current accounts to credit lines.

Why Memo Chose Elixir

There are two wonderful first-hand accounts detailing the planning and architecting stages of the early days of Memo, which you can find on their Medium profile. This section summarises their story.

As Jérémie Matinez explains in his post ‘Why Elixir? An alchemy between backend and banking’, the advantage of building from the ground up was that they could incorporate the most efficient procedures and modern technologies into their systems from the very start. The guiding principle was that, when it comes to banking, data – its accuracy, accessibility, and security – is of paramount importance. To comply with financial services regulations and to earn customer trust, Memo needed to build a system that is always available, at any time, from any device. Combined, these mission-critical requirements led to the decision to adopt Elixir for the core banking system and all of the other backend applications.

Memo was particularly attracted to building in Elixir to leverage its immutability for ease of development and easier concurrency and testability, necessary when needing 100% reliability for a FinTech system. They also pinpointed the language as being able to reach the level of system scalability and availability needed to absorb real-time transactions.

What else did they like?

A good trade-off between performance and high-level features

Memo found that Elixir offered them the perfect balance between performance and features to provide them with the reliable structure to facilitate their ambitions at scale and provide high availability for reliable real-time transactions.

Being built upon Erlang and targeted to run on the BEAM virtual machine, Elixir shares Erlang’s performance, ecosystem advantages, and concurrency model as standard. Also, Erlang is part of the exclusive club of “nine nines” languages, which makes it one of the most available and reliable platforms out there.

The growing and solid community

The Elixir developer community is very friendly and welcoming and the ecosystem is growing all the time. Memo’s team found that although it is a relatively new language, it does enjoy the availability of any tool that is written originally for an Erlang codebase. They also mention some of the extremely powerful frameworks that allow full-stack development, most prominently Phoenix Live View for web application development. 

Join us and the rest of the European Elixir community September 9-10 at ElixirConf EU hybrid conference in person in Warsaw, Poland or online.


Elixir is built on top of really mature and stable tech (Erlang and the BEAM VM) with good documentation, which is a big plus for any startup looking to grow trust with their customer base. It is used by many other successful FinTech companies such as Klarna and SolarisBank, making it a battle-tested language for innovation in the industry.

There are some major positives to opting for Elixir which meets the requirements of startups and scaleups in the space:

  • Scalability – if you reach your goal of millions of users, you’ll benefit from the reliability and scalability of the BEAM VM
  • Concurrency – reliably handling many simultaneous requests, such as spikes in transactions
  • Ease of development and maintenance – ease of use and fast development for everything from fixing bugs to adding new features

Overall, what you gain from using Elixir for your FinTech project is a smaller, more manageable codebase that is faster and significantly more reliable when compared to systems built in other programming languages.

We worked with Bleacher Report, the leader in providing real-time, social-first, mobile-first sports content, to help them move their system from Ruby to Elixir. We achieved a 10x reduction in the time it takes to update the site for a system moving from 150 servers to just 8. Check out the case study here. 

‘They were able to come in with their expertise, help us establish best practice and give us confidence that going forward our systems would be efficient and reliable.’

Dave Marks, Senior Engineering Director @ Bleacher Report


Overall the Memo Bank system uses modern tools and processes and is designed for speed. The transactional core of Memo Bank is now fully powered by Elixir to deliver on its mission of maintaining customer account records with the highest possible availability and reliability. The Core Banking System is easily adaptable to new customer needs and produces accounting and regulatory reporting in real-time.

What this success story shows is that whether you need to start from scratch or you already have an infrastructure to integrate with, Elixir is a proven, sound technological choice to build software that will adapt to your business and stand the test of time and scale.

The Erlang Solutions team has worked closely with the Elixir core team since its inception. Whether you’re new to Elixir, looking to grow your team, add new functionality or integrate with a new system, we’re here to help you make it happen. Tell us about your project requirements here.

The post Why Build A Bank In Elixir – Memo Bank’s Story appeared first on Erlang Solutions.

by Michael Jaiyeola at August 06, 2021 11:16