Planet Jabber

September 17, 2021

Erlang Solutions

FinTech Matters newsletter | September 2021

Subscribe to receive FinTech Matters and other great content, notifications of events and more straight to your inbox. We will only send you relevant, high-quality content, and you can unsubscribe at any time.

Read on to discover what really matters for tech in financial services right now for the Erlang ecosystem and beyond.

It’s back-to-school season following a disrupted summer for most, but one in which the FinTech world has continued to innovate and grow – global investment in H1 reached $98bn (£18bn in the UK) and Revolut raised a funding round of $800m at a valuation of $33bn.

Michael Jaiyeola, FinTech Marketing Lead

[Subscribe now]

The Top Stories Right Now

Study To Investigate The Impact Of Open Source Software On The EU Economy

This detailed report from the EU examines the technological independence, competitiveness and innovation around open source software. The main breakthrough of the study is described as the ‘identification of open source as a public good’. The value of open source technologies for many modern industries is well recognised, but financial services have lagged behind somewhat. It is true that in highly regulated industries the compliance requirements may require some extra work when it comes to open source, but the idea that to be successful you must build using proprietary technology is finally being dispelled. A report at the beginning of the year forecast the open source services market to grow by 24% by 2025.

Communities like those of Erlang and Elixir offer collaboration and information sharing that raise standards for all – it’s not just about free software. Where financial services infrastructure leverages open source technology that meets shared requirements, individual companies can focus on adding differentiated value to their products and services. For FinTech, faster innovation, lower development costs and quicker time to market are the holy trinity, and open source technology enables all three in the right use cases, where being agile, responsive and scalable will determine competitiveness and success.

Get the report

Solarisbank raises $224M at a $1.65B valuation

Solarisbank, the tech company with a banking license (whose platform is built using Erlang and Elixir), will use the new funding to acquire Contis and expand API-based embedded banking tech in Europe. Solaris was one of the first fully-fledged banks to offer Banking-as-a-Service, which is one of the FinTech segments (along with embedded finance) that has thrived over the pandemic period.

Read more

FCA loses £300k worth of electronic devices

In a ‘do as I say, not as I do’ comedy own goal, the FCA has misplaced a total of 323 electronic devices (estimated worth £310,600) over the past three years, according to a freedom of information request. The devices are predominantly made up of hundreds of laptops, tablets, desktops and mobile phones reported lost or stolen by FCA employees. Unsurprisingly, this raises questions about data protection standards at the industry regulator.

Read more

Verizon and Mastercard partner to bring 5G capabilities to payments

The strategic aim is to integrate 5G into payments focusing on contactless shopping, checkout automation and Point of Sale (POS) experience solutions. It is stated that this will be achieved by harnessing the latest in IoT technology alongside real-time edge computing.

Read more

More content from us

Kivra – Nordic FinTech case study for digital document sending platform

Memo Bank’s story  – How they used Elixir to build a bank from scratch

State of play in FinTech – I take a high-level look at some of the industry trends of 2021 so far

Kim Kardashian’s cryptocurrency Instagram post – the ‘financial promotion with the single biggest audience reach in history’!

When ultra-influential influencers meet newly developed tokens, what could possibly go wrong? Well, potentially plenty, according to the head of the FCA, Charles Randell, who called Ethereum Max (nothing to do with the Ethereum platform) ‘a speculative digital token created a month before by unknown developers’. Read more

One in four UK financial services workers want to work from home full-time

A new survey from Accenture has found that 24 per cent of the UK’s 1m financial services workers “would prefer to work entirely from home once a full return to office is possible”. Read more

Klarna joins leading climate change programmes

The Swedish BNPL unicorn is the first FinTech to sign up for The Climate Change Pledge and the Race to Zero campaign. Read more

Erlang Solutions byte size

Did you miss joining our livestream of “What’s Next for Blockchain in Financial Services” during FinTech Week London? Well, don’t worry – you can get exclusive early access to the full video of the panel debate here.

Code BEAM America – Created for developers, by developers, the conference is dedicated to bringing the best minds in the Erlang and Elixir communities together to SHARE. LEARN. INSPIRE. over two days, November 4-5.

Trifork Group (our parent company) reports revenue growth of 55% in Q2 and 46% in H1 2021. The Q2-2021 Interim Report can be downloaded here. In Q2, Trifork Labs continued its active investment strategy and increased investments in the new Fintech startups Kashet, a new mobile-first challenger bank in Switzerland, as well as in a joint-venture Fintech startup (Money), co-owned by three mid-sized banks. Trifork has entered an integration partnership with Modularbank, a cloud-native core banking as a service solution.

To make sure you don’t miss out on any of our leading FinTech content, events and news, do subscribe for regular updates. We will only send you relevant high-quality content and you can unsubscribe at any time.

Connect with me on LinkedIn


The post FinTech Matters newsletter | September 2021 appeared first on Erlang Solutions.

by Michael Jaiyeola at September 17, 2021 11:44

September 05, 2021

The XMPP Standards Foundation

The XMPP Newsletter August 2021

Welcome to the XMPP Newsletter covering the month of August 2021.

Many projects and their efforts in the XMPP community are a result of people’s voluntary work. If you are happy with the services and software you may be using, especially throughout the current situation, please consider saying thanks or helping these projects!

Read this Newsletter via our RSS Feed!

Interested in supporting the Newsletter team? Read more at the bottom.

Other than that - enjoy reading!

Newsletter translations

Translations of the XMPP Newsletter will be released here (with some delay):

Many thanks to the translators and their work! This is a great help to spread the news! Please join them in their work or start a new translation in another language!


XMPP Office Hours - Also, checkout our new YouTube channel!

Berlin XMPP Meetup (remote): Monthly Meeting of XMPP Enthusiasts in Berlin - always 2nd Wednesday of the month.


What is project XPORTA? As announced in the April ‘21 newsletter, the Data Portability and Services Incubator at NGI is sponsoring the XMPP Account Portability project named XPORTA. This month they host an interview with Matthew Wild about how this project came into existence.

The “have your own TelCo based on XMPP” service has a new blog, with a twist: it is now based on Libervia, and therefore on XMPP, with all the nice blog features that you want (like RSS) and even subscriptions via XMPP (with compatible clients like Movim or Libervia). The post announcing the new blog also covers the new registration flow and billing system. But the previous post is the real jewel, called Adventures in WebRTC: Making Phone Calls from XMPP. It details the journey through WebRTC debugging, multiple clients, NAT, ICE – all monitored through Wireshark. Get a hot or cold beverage to go with this roughly 70-minute read.

In the previous newsletter we mentioned that Debian Linux 11 would soon be launched with updated XMPP software. Now that this has happened, server admins are already updating or even setting up new deployments – such as Nelson from Luxembourg, who published a blog post about setting up a server with ejabberd on Debian 11 Bullseye.

While the Snikket iOS client app was just released (read more below), the behind-the-scenes development continues. In the latest blog post, Matthew Wild announces that the expert folk at Simply Secure will be performing a usability audit of the current app, as well as conducting usability testing, thanks to funding from the OTF’s Usability Lab. The analysis will help to improve the UX of the iOS app and Snikket as a whole.

Missed in last month’s issue: the folks at CometChat have blogged about XMPP’s history, working architecture, stanzas and features in general in Everything About XMPP - Extensible Messaging & Presence Protocol. If you want a quick technical overview (or need one to show others what XMPP is all about), this ~15-minute read can bring you up to speed.

“Spaces” are the new XMPP frontier to be explored, and you’ll get a glimpse of them in the Gajim client news below, but the work is quite elaborate and ongoing, with many people involved. pulkomandy, developer of Renga (an XMPP client for Haiku), has blogged Some random thoughts about XMPP spaces, thinking about use cases (family, business, communities) and user interfaces.

Any Turkish speakers reading the newsletter? We don’t have a translation yet, but Ged has just published an in-depth blog post about XMPP titled Hangi “Chat” Programı?. In about 40 minutes it takes the reader through the story of the protocol, tells about apps, servers, comparisons with popular apps and privacy.

The March ’21 newsletter brought the news that JSXC (the JavaScript XMPP Client) got funding to work on group chat calls. This month they report on the work done and explain the current progress, which can even be tested.

Finally, how does FaceTime work? They interestingly use the same port (5223) as XMPP does…

Software news

Clients and applications

Gajim 1.4 Preview: Workspaces. The Gajim team has been hard at work in the past months to prepare the next v1.4 release. The upcoming version brings a major interface redesign. In this post, they explain how the new interface works and what remains to be decided or implemented before the release.

Gajim Workspaces (preview)

Libervia progress note 2021-W31 is out with information about Docker integration, the translation portal and the first 0.8.0 beta. It also has plenty of details about the work done on the ActivityPub Gateway project (grant announced in the April ‘21 newsletter) with SQL, DBus, PubSub and with new and updated XEPs.

Communiqué is a new XMPP client from the Mellium Co-op team. It was announced this month and presented at the XMPP Office Hours (unfortunately recording did not work out). The source code can be found in the repository.


Monal 5.0.1 is now available for both iOS and macOS bringing mostly corrections and more polish over the previously major release.

JSXC Openfire plugin gets a 4.3.1-1 release, with mostly bug fixes and improvements from the JSXC project.

After so many months of waiting the Snikket iOS app is now publicly released. Snikket server admins can add the app to the invitations pages to have Apple users easily find it. If you are not running Snikket you can still use the app (you can use credentials directly) but do read the blog post to know what you need to add to your Prosody instance (invitations modules) or what limitations you might experience using any other server software.

Snikket on iOS


Prosody 0.11.10 has been released with a fix for CVE-2021-37601 and some minor changes. Prosody developers recommend server admins to upgrade in order to fix the remote information disclosure issue.


The Mellium Dev Communiqué for August includes updates to the Mellium XMPP library as well as the new Communiqué instant messaging client. The biggest updates this month are MAM and ad-hoc commands support! You can read more here.

Extensions and specifications

Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).


The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

  • No XEPs proposed this month.


  • Version 0.1.0 of XEP-0460 (Pubsub Caching Hints)
    • Accepted by vote of Council on 2021-07-21. (XEP Editor (jsc))


If an experimental XEP is not updated for more than twelve months, it will be moved off Experimental to Deferred. If there is another update, it will put the XEP back onto Experimental.

  • No XEPs deferred this month.


  • Version 1.21.0 of XEP-0060 (Publish-Subscribe)

    • Revert change from version 1.15.5 which changed meta-data to metadata in wire protocol. That was an unintended breaking change which has now been reverted. (pep)
  • Version 0.3.0 of XEP-0214 (File Repository and Sharing)

    • Revert change from version 0.2.1 which changed meta-data to metadata in wire protocol. That was an unintended breaking change which has now been reverted. (rm)
  • Version 0.3.0 of XEP-0248 (PubSub Collection Nodes)

    • Revert change from version 0.2.1 which changed meta-data to metadata in wire protocol. That was an unintended breaking change which has now been reverted. (rm)
  • Version 0.2.0 of XEP-0283 (Moved)

    • Re-write the flow with a more focused approach. (mw)
  • Version 1.1.0 of XEP-0429 (Special Interests Group End to End Encryption)

    • Add discussion venue after creation by the Infrastructure Team. (mw)
  • Version 1.24.0 of XEP-0001 (XMPP Extension Protocols)

    • Change “Draft” to “Stable”. (ssw)

Last Call

Last calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call helps improve the XEP before returning it to the Council for advancement to Draft.

  • No Last Call this month.

Stable (formerly known as Draft)

Info: The XSF has decided to rename ‘Draft’ to ‘Stable’. Read more about it here.

  • No Stable this month.

Call for Experience

A Call for Experience, like a Last Call, is an explicit call for comments, but in this case it is mostly directed at people who have implemented, and ideally deployed, the specification. The Council then votes to move it to Final.

  • No Call for Experience this month.

Thanks all!

This XMPP Newsletter is produced collaboratively by the XMPP community.

Therefore many thanks to Adrien Bourmault (neox), Anoxinon e.V. community, anubis, Benoît Sibaud, emus, Sam, Licaon_Kter, nicola, seveso, SouL, wurstsalat3000, Ysabeau for their support and help in creation, review and translation!

Spread the news!

Please share the news via other networks:

Find and place job offers in the XMPP job board.

Also check out our RSS Feed!

Help us to build the newsletter

We started drafting in this simple pad in parallel to our efforts in the XSF Github repository. We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. We really need more support!

Do you have a project and write about it? Please consider sharing your news or events here, and promote it to a large audience! Even if you can only spend a few minutes, your support would already be helpful!

Tasks which need to be done on a regular basis are for example:

  • Aggregation of news in the XMPP universe
  • Short formulation of news and events
  • Summary of the monthly communication on extensions (XEP)
  • Review of the newsletter draft
  • Preparation for media images
  • Translations: especially German and Spanish


This newsletter is published under CC BY-SA license.

September 05, 2021 00:00

August 31, 2021


Snikket iOS app now publicly released

This is the announcement many people have been waiting for since the project began!

Opinions are often strong about which is the best mobile operating system. However, while it varies by region and demographic, wherever you are it’s very likely that you have Apple users in your life, even if you don’t use one yourself. We want to ensure that the platform you use (by choice or otherwise) is not a barrier to secure and decentralized communication with the important people in your life.

The lack of a suitable client for iOS was an obstacle to many groups adopting Snikket and XMPP. For this reason, today’s release of a Snikket app for Apple’s iPhone and iPad devices is a significant milestone for the project.

A community effort

It’s a journey that began late last year with the announcement that we would be sponsoring support for group chat encryption in Siskin IM, the open-source iOS XMPP client developed by Tigase.

The Tigase folk have been very supportive of our project, and I’d like to especially thank Andrzej for his assistance and patience with all my newbie iOS development questions!

There are many other folk who have also helped unlock this achievement. This includes everyone who helped to fund the development work - especially Waqas Hussain, the kind folk at and of course absolutely everyone who has donated to the project. The majority of donations are anonymous so it’s impossible to thank everyone individually, but the amount of support we’ve received as a project is amazing, and really gives us confidence in achieving even more ambitious milestones in the future.

Funding aside, we couldn’t have refined the app without help from our diligent beta testers - with particular thanks to Michael DiStefano, Martin Dosch, mimi8999 and Nils Thiele for their bug-catching and comprehensive feedback. Everyone participating in the beta programme has helped shape the app we’re releasing today.

What happens now?

We’ll be rolling out a Snikket server update shortly that will add a link to the iOS app from Snikket invitation pages. If you’re eager to make the app available to your users before then, you can add the following line to your snikket.conf:


After saving the file, apply the change with the command docker-compose up -d.

If you are using the Snikket hosting service, you will get an email soon that explains how to enable the app store link for your instances.

We’re not done yet

This is a big milestone, without a doubt. But we’re not completely done. The app is not perfect (yet!) and we’re still working on many things. But we believe this is no reason not to share it with the world as early as we can.

Push notification compatibility

The first thing to note (especially as many non-Snikket users will also be excited about a new iOS XMPP client on the scene) is that our primary focus has been on the app working seamlessly with Snikket servers. We’re committed to XMPP interoperability, but time and resources mean we can’t develop and test every change in pace with every XMPP server.

Although we expect it to generally work, there are some known compatibility issues currently. Specifically, due to the strict “no background network connections” policy for iOS apps, we have needed to handle push notifications slightly differently from what is supported on most XMPP servers today. The extensions we use are openly published by Tigase, and we have made available community modules for Prosody (mod_cloud_notify_encrypted, mod_cloud_notify_priority_tag and mod_cloud_notify_filters), and discussion has begun on moving these extensions over to the XMPP Standards Foundation standards process. We welcome help and contributions towards evolving XMPP’s current push notification support. If you’re interested, reach out!

Until then, although the app includes some backwards-compatibility considerations, it is very possible that you will experience issues with notifications on some non-Snikket servers when the app is closed (though Tigase servers, and Prosody servers with the community modules enabled, should be fine).
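For Prosody admins who want to experiment, here is a minimal sketch of what enabling these community modules might look like in prosody.cfg.lua. The module names come from this post; the plugin path and the presence of the base mod_cloud_notify module are assumptions, so check each module’s own documentation before deploying:

```lua
-- Hypothetical excerpt from prosody.cfg.lua; adjust paths to your setup.
plugin_paths = { "/usr/lib/prosody/community-modules" }

modules_enabled = {
    -- ...your existing modules...
    "cloud_notify";               -- base push notification support (XEP-0357)
    "cloud_notify_encrypted";     -- encrypted push payloads
    "cloud_notify_priority_tag";  -- priority hints for iOS notifications
    "cloud_notify_filters";       -- user-configurable notification filters
}
```

After editing the configuration, reload or restart Prosody for the change to take effect.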

Language support

The app is currently only available in English, which is an unfortunate contrast with all other Snikket projects, which are already available in many languages.

Updating the app to support translation of the interface is high on our priority list. After this is implemented, we will also be looking for help from translators, so stay tuned for further announcements.

Other work in progress

Other known issues that we are working on:

  • Notifications for OMEMO-encrypted messages show a potentially-confusing message about the app lacking OMEMO support. This will be fixed by the same server update that adds the app to the Snikket invitation page.
  • Group chat notifications are not yet working. This will also be rolled out as a future server update.

Of course, we will also soon be incorporating feedback from the usability audit and testing sessions when that work is completed.

I want to say a final thanks to our entire community for supporting the project. Snikket has ambitious goals, and the progress we’re making couldn’t be achieved without all the help and support we’ve received.

Drop us feedback about the app if you try it out, file bug reports and feature requests to help us with planning and, if you can, donate to help sustain the development of the entire project.

We look forward to welcoming more users to the XMPP network than ever before!

by Snikket Team at August 31, 2021 14:00

August 27, 2021


Gajim 1.4 Preview: Workspaces

The Gajim team has been hard at work in the past months to prepare the next v1.4 release. The upcoming version brings a major interface redesign. In this post, we explain how the new interface works and what remains to be decided or implemented before the release.

Of course, your feedback is important! No interface can please everyone, so please react to this post with how this change would impact you positively and negatively, and ideas you have to make it even better before the release.

This blog post is in part based on the Gajim 1.4 UI/UX Preview given by lovetox, a current maintainer of Gajim. So if you prefer the video format, click on that YouTube link or use your favorite Invidious instance to view it with a lightweight, privacy-friendly client. That presentation was given as part of the XMPP Office Hours programme, where you can find other interesting presentations about the Jabber/XMPP ecosystem, or propose your own!

Single-window application

The main change in Gajim’s new release is that, in the current implementation, it becomes a single-window application. For over a decade we have been used to having separate windows for the contact list (roster) and for chats. This user interface pattern was common in early 2000s messengers such as MSN and ICQ.

In the upcoming release, we make Gajim a single-window application, where all features are always within your reach. This change is inspired by more recent messengers such as Element, Discord or Mattermost (among others). This is what it looks like so far:

Gajim’s new main window


Some people feel left out by this new direction and the removal of the multi-window mode; however, we hope to reconcile our users’ needs as part of the Gajim project, as explained in the Areas for improvement section of this blog post.


Gajim v1.4 will introduce a new concept: workspaces. Previously, all tabs were considered equal as a flat list within a window. We understand the need to organize some activities into a specific context, but without multiple windows, we organize these activities by workspace.

A workspace is a collection of group chats and private chats, organized client-side. For the moment, this is a non-standard, Gajim-specific feature, but standardization efforts are explained in the Areas for improvement section.

We introduced a new sidebar on the left of the window which allows you to navigate your workspaces and accounts. After clicking a workspace, its chat list is displayed in the sidebar. This chat list, to the right of the workspace list, provides navigation for chats (both group chats and private chats) within the current workspace. The currently focused workspace has a colored bar indicating that it is the current context.

Below the workspace list, the sidebar lists your accounts. Clicking an account will display a page containing the contact list, your avatar, a status selector, and a list of pending notifications. Contacts in the contact list are organized by roster groups, as was already the case in previous versions.

Account context

Each account is given a specific color, in addition to its avatar. This color is reused in the chat list, alongside the tab’s avatar, so you can see instantly which of your accounts is used in a specific chat. When a given chat or account doesn’t have an avatar defined, one is generated from the first character of its displayed name.
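As an illustration of the fallback rule just described, a deterministic per-account color plus an initial-letter avatar could be sketched as below. This is not Gajim’s actual code; the function names and the hash-to-hue mapping are assumptions for the sake of the example:

```python
# Illustrative sketch only: a stable color derived from the account name,
# and the first character of the displayed name as a fallback avatar.
import colorsys
import hashlib

def account_color(name: str) -> tuple[int, int, int]:
    """Map an account name to a stable RGB color via a hash-derived hue."""
    hue = int(hashlib.sha256(name.encode()).hexdigest(), 16) % 360
    r, g, b = colorsys.hsv_to_rgb(hue / 360, 0.6, 0.9)
    return round(r * 255), round(g * 255), round(b * 255)

def fallback_avatar(display_name: str) -> str:
    """Use the first character of the displayed name when no avatar is set."""
    return display_name[0].upper() if display_name else "?"

print(fallback_avatar("alice@example.org"))  # → "A"
```

Because the color is derived from a hash of the name, the same account always gets the same color across restarts without any stored state.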

Gajim with multiple accounts


When a notification is received within a certain workspace, an indicator with the number of unread messages will be shown on the workspace icon and on the chat.

Organizing your interface

Workspaces can be reordered manually within the sidebar by drag-and-drop. However, the two different types of context are kept separate: workspaces appear at the top of the list, while accounts are listed at the bottom. When there are too many entries to display, the workspace/chat list becomes scrollable.

Chats can also be moved from one workspace to another, though not via drag-and-drop: simply right-click a chat and use the “Move to” menu to move the selected chat to the requested workspace. However, it is not currently possible to copy a chat to another workspace; moving an entry to a new workspace removes it from its previous workspace.

Within a given workspace, chats can be pinned. These stay in place at the top of the workspace’s chat list. Chats which are not pinned are ordered by latest activity. This way you never have to scroll endlessly to find the chat that matters to you. For the moment, pinned tabs cannot be reordered the way workspaces can, but we plan to implement this.
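The ordering rule above (pinned chats first, the rest by most recent activity) can be sketched in a few lines. The data structure is a simplified assumption for illustration, not Gajim’s internal model:

```python
from dataclasses import dataclass

@dataclass
class Chat:
    name: str
    pinned: bool
    last_activity: float  # Unix timestamp of the newest message

def ordered(chats: list[Chat]) -> list[Chat]:
    """Pinned chats keep their manual order at the top; the rest are
    sorted by most recent activity, newest first."""
    pinned = [c for c in chats if c.pinned]
    rest = sorted((c for c in chats if not c.pinned),
                  key=lambda c: c.last_activity, reverse=True)
    return pinned + rest
```

With this rule, a chat that just received a message bubbles to the top of the unpinned section without disturbing the pinned ones.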

Try it out and let us know

There are a lot of major changes coming in the next Gajim v1.4 release, so stay tuned to the blog for further information. In the meantime, you can test the new interface by running Gajim from sources using just a few commands. This feature is not published in nightly releases yet because it is still unstable, so do not use it as a daily driver yet.

Important: Note that you have to start Gajim with a test profile using gajim -s -p testprofile in order to preserve your current profile. Migrating back is not possible.

  • git clone && cd gajim to download Gajim’s source into a gajim folder and moving there
  • git checkout mainwindow to browse the development branch with the new UI
  • pip install . to install Gajim’s development version and all dependencies to your python environment, then gajim -s -p testprofile to start
  • alternatively, ./ -s -p testprofile to start Gajim without installing it, in which case dependencies should be manually setup first (for example On Ubuntu)

Feedback is welcome in any form, whether on our issue tracker, in our community chat, or as a blog post on your own website. The main tracking issue for this new user interface is #10628.

Areas for improvement

In this section, we explain the shortcomings of the current implementation of the workspaces feature, and what could be done to improve it. We are actively looking for ideas on these areas, so if you can afford it, please spend some time to gather your thoughts and help us improve Gajim.


Account context relies on user-supplied colors. However, for accessibility concerns (color-blindness), we would be interested to support other graphical patterns instead of colors. For example, dots and dashes and other visual patterns that are common in graphs and tables. However, unless we get more contributions, it’s unlikely this feature will be released in v1.4.


The main window redesign does not support right-to-left (RTL) languages in a special way yet. The navigation sidebar will be displayed on the left-side of the screen in all cases.

UI customization

Some users have already expressed their anxiety at the idea of dropping support for multiple windows in Gajim. However, there is technically no barrier preventing us from reimplementing it with our new user interface. It’s “just” a lot of hard work.

For example, maybe we could have a mode where each account gets its own window that can be moved around separately? Or pop a workspace out of the main window into its own window? That would be useful when using virtual desktops (sometimes called workspaces, what a coincidence) in your favorite desktop environment.

In addition, we could explore supporting multiple sidebars on multiple axes, so that you could decide where to place your account list, and divide your workspace list into top and bottom sidebars.

Only your imagination and contributions to the Gajim project are the limit for the kind of experience we can provide, but it’s very unlikely deeper UI customization will be implemented in time for the v1.4 release. We are a volunteer-run project and cannot afford to spend time to accommodate every single need there is, although contributions are always welcome.

More workspace organization

Currently, pinned tabs in the chat list cannot be reordered in the way that workspaces can be in the workspace list. Would this be useful for you?

Moreover, Gajim’s new workspaces UI currently features a 2-level representation like Mattermost’s, where any chat has a single ancestor workspace. The account roster is an exception, because it features a third level of nesting in order to fit roster groups: each entry is part of a group, which is part of the account workspace context. Maybe workspaces could benefit from this approach in order to represent 3-level hierarchies akin to the Discord/Element interface.
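To make the difference concrete, here is a toy model (all names are illustrative; nothing here is Gajim code) of a workspace that supports both the flat 2-level layout and an optional third level of grouping:

```python
from dataclasses import dataclass, field

@dataclass
class Group:
    name: str
    chats: list[str] = field(default_factory=list)

@dataclass
class Workspace:
    name: str
    # 2-level layout: chats directly under the workspace (Mattermost-style)
    chats: list[str] = field(default_factory=list)
    # optional 3rd level: chats nested in groups (roster-group-style)
    groups: list[Group] = field(default_factory=list)

def all_chats(ws: Workspace) -> list[str]:
    """Flatten direct and group-nested chats for display in the chat list."""
    return ws.chats + [c for g in ws.groups for c in g.chats]
```

A UI can then render the flat chats directly and indent the grouped ones one level deeper, while the rest of the code treats both the same via all_chats.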

Also, a chat can currently only be featured in a single workspace, for the sake of simplicity. That is a fine assumption as long as workspaces are managed by a single user for their own needs, but it would not play well with sharing workspaces with other users, in which case a chat may appear more than once in the workspace tree.

Standardization and interoperability

As mentioned briefly, we’re considering how our new workspaces feature can be represented server side, so that it can be used by other clients, and maybe even shared across users.

Sharing a workspace with several users, similar to Matrix “spaces” or Discord “servers”, could prove very useful for online communities administering a number of channels, for example to set space-wide permissions. It could also make it possible to subscribe to a public workspace maintained by a contact of yours, featuring a set of third-party group chats on a specific topic.

While there is not yet a specification for such hierarchical organization of chats in the XMPP ecosystem, there was an XMPP Online Sprint last winter studying Discord’s user experience in order to benefit the Jabber/XMPP ecosystem.

More recently, some people have started to gather thoughts that should lead to a specification. There is a work-in-progress document (a pad) which anyone can edit with feedback, and a group chat has been set up to discuss this issue in a cross-project manner. Your ideas and contributions are more than welcome, even if you are not familiar with the Jabber/XMPP ecosystem. Feedback on how a new specification could be made interoperable with other decentralized networks is very welcome.


August 27, 2021 00:00

August 23, 2021


Improving Snikket's usability in collaboration with Simply Secure

One of the primary goals of the Snikket project is improving the usability of open communication software. We see usability as one of the major barriers to broader adoption of modern communication systems based on open standards and free, libre, open-source software. By removing this barrier, we open the door of secure and decentralized communication freedom to many vulnerable groups for whom it was previously inaccessible or impractical.

Simply Secure is a non-profit organization working in user interface (UI) and user experience (UX) design. They specialize in combining human-centered design with the complex technical requirements of privacy-first secure systems. Our first introduction to Simply Secure was while contributing to Decentralization Off The Shelf (DOTS), a unique and valuable project to document and share successful design patterns across the decentralized software ecosystem.

Now, thanks to funding from the OTF’s Usability Lab, we’re pleased to announce that Simply Secure will be working with us over the coming months to identify issues and refine the UX across the project, with a special focus on our iOS app.

We’ve made a lot of progress on the Snikket iOS app recently, largely based on valuable feedback from our beta testers, and we are getting excitingly close to a general release. However, there is still some work to be done.

The expert folk at Simply Secure will be performing a usability audit of the current app, as well as conducting usability testing, which is the study of how people use the app, and what struggles they face while completing specific tasks.

Using information from these analyses the Simply Secure team will assist with producing wireframes (sketches of what the app’s interface should look like) and actionable advice to improve the UX of the iOS app and Snikket as a whole. You will find information on how to participate later in this post.

What is UX anyway?

The modern UX design movement is a recognition that technology should be accessible and easy to use for everyone. Good design can assist and empower people; poor design can hinder and even harm them. The need for design goes far beyond making a user interface look beautiful. A lack of visual appeal may affect someone’s enjoyment of an application, but an aesthetically-pleasing interface is not magically user-friendly.

Therefore, designing for a good user experience is about more than just making the interface look good: it’s about considering how the software fits into a person’s life, what they need from the software (and what they don’t need), and how they expect it to behave.

These are tricky things to get right. Every user is different, and a broad range of input must be taken into consideration as part of a good design process.

UX methodologies

There are various ways to gather information useful for making informed decisions about UX improvements. A common, easy, and cheap approach is to add metrics and analytics to an app. This can tell you things like how often people tap a particular button, or view a particular screen. Developers and designers can use this information to learn which features are popular, which could be removed, and which should be made more visible.

This approach has drawbacks. Firstly, it only tells you what users are doing; it doesn’t tell you why they are doing it, or what they are thinking and feeling - for example, whether they are frustrated while looking for a particular feature or setting. Metrics can tell you that making a button more prominent increased the click rate, but they won’t tell you if half the users who clicked the button were expecting it to do something else! This isn’t really enough information to improve usability.

Another significant drawback with a focus on metrics is the amount of data the app must share with the developers. People generally don’t expect apps on their device to be quietly informing developers about the time they spend in the app, what they look at and what buttons they press. Such data collection may be made “opt-in”, and there are modern projects such as Prio, working to bring privacy and anonymity to such data collection through cryptographic techniques.

A wildly different but much more valuable approach is to directly study people while they use the app - a technique known as “usability testing”. Unlike silent data collection, usability testing pairs individual users or groups with an expert while they are asked to perform specific tasks within the app. Although this requires significantly more time and effort, it produces more detailed and specific insights into the usability of an interface.

Advantages of this kind of study include the ability to listen to and learn more deeply about the needs of specific types of users, particularly minorities whose problems could easily be drowned out by larger groups of users in a simple statistics-driven data collection approach. It also allows you to capture people’s thought processes by asking them to explain each step as they complete tasks within the app.

Participation and looking forward

We can’t wait to begin our first usability testing facilitated by the experienced team at Simply Secure, and incorporate their findings into Snikket’s development.

If you’re interested in taking part, or know someone who would be a good fit for this project, we’d love to talk to you for 30 minutes to better understand how to improve Snikket. There will be no invasions of privacy as a result of this research. All identifying information will be removed. We will take all necessary and appropriate precautions to limit any risk of your participation. Anything that we make public about our research will not include any information that will make it possible to identify you. Research records will be kept in a secure location, and only Simply Secure and Snikket personnel will have access to them.

Appointment slots are available from 24th August to 3rd September. To participate, register your preferred time and date on the calendar here [1].

Further reading

  1. Google Calendar currently, sorry 😕

by Snikket Team at August 23, 2021 10:00

August 19, 2021


Newsletter: Blog, New Registration, New Billing, New App!

Hi everyone!

Welcome to the latest edition of your pseudo-monthly JMP update!

In case it's been a while since you checked out JMP, here's a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client. Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

In case you haven't seen it yet, we now have an XMPP-powered blog! All newsletter updates, as well as other content like technical deep-dives, will be published there. If you just want these updates, don't worry: the mailing list isn't going away. You can check out the blog and follow it in your RSS reader or a compatible Jabber client such as Movim or Libervia.

JMP also has a new registration flow. This flow properly integrates with our new billing system and represents a lot of behind-the-scenes work on our architecture. The most important part of the new billing system is the referral system. That's right: JMP users can now get single-use invite codes to refer new users. The new user gets one free month, and if they decide to upgrade to a paid account, the original user will get a free month of credit too! XMPP server operators for closed or vetted groups can also contact support to ask that their server be added to an approved list, where all Jabber IDs coming from that server will be given a free month, with the resulting credit if they upgrade going to the server operator.

Speaking of our new billing system, many users have been fully migrated to the new architecture, which says goodbye to PayPal and hello to automated credit card and Bitcoin deposits, as well as official support for payment by mail or (in Canada) Interac e-Transfer. Payments can also be made in Bitcoin Cash by contacting support. Users on the new system now have a prepaid balance they can top up any time they like, with the option to automatically top up a low balance with any amount of $15 or more from a credit card. Deposits over $30 get a 3% bonus added, and deposits over $140 get a 5% bonus. This paves the way for calling minutes beyond 120 / month (which will soon be available at the rate of $0.0087 / minute) and also international calling at per-minute rates to be announced later this year. Those who prefer to pay the same amount every month or year, as with our legacy PayPal system, will need to wait a bit until we integrate that option into the new system.
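The bonus tiers and per-minute rate above can be sanity-checked with a quick sketch. The thresholds and rates come from this newsletter; the function names and structure are purely illustrative, not JMP's actual billing code:

```javascript
// Sketch of the deposit-bonus and overage maths described above.
// Thresholds/rates are from the newsletter; everything else is illustrative.

function depositWithBonus(amount) {
  // Deposits over $30 get a 3% bonus; over $140, a 5% bonus.
  if (amount > 140) return amount * 1.05;
  if (amount > 30) return amount * 1.03;
  return amount;
}

function overageCost(minutesUsed, includedMinutes = 120, perMinute = 0.0087) {
  // Minutes beyond the included 120/month billed at $0.0087/minute.
  const extra = Math.max(0, minutesUsed - includedMinutes);
  return extra * perMinute;
}

console.log(depositWithBonus(150).toFixed(2)); // 157.50
console.log(overageCost(200).toFixed(2));      // 80 extra minutes -> 0.70
```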

We've also had a volunteer working with us to prepare some new features for Android users, most notably DTMF (punching in numbers during a call) so that all phone calls can be done from inside Conversations. The code isn't quite ready for upstream yet, but drop by the chatroom if you want to try out a prototype.

As always, if you have any questions, feel free to reply to this email or find us in the group chat (details below). We're happy to chat whenever we're available!

To learn what's happening with JMP between emails like this, here are some ways you can find out:

Thanks for reading and have a wonderful rest of your week!

by singpolyma at August 19, 2021 00:30

August 18, 2021

Erlang Solutions

FinTech 2021 State of Play

While things have undoubtedly changed considerably for the financial services industry over the past 18 months, the ascendancy of FinTech continues unabated, with global FinTech investment reaching $98bn. In the UK, FinTech investment hit a new record of £18bn in the first half of 2021, placing the country second only to the United States, an impressive feat during a time of considerable uncertainty brought on by the pandemic and short-term Brexit fallout.

The FS industry has proved extraordinarily resilient and indeed many segments have even thrived during this period, such as FinTechs operating around digital payments and processes. 

The overarching trends for the industry are the accelerated digitalisation of banking, adoption of embedded finance (including buy now, pay later) and decentralised finance to further democratise access and opportunity to financial services. In this post, we take a look at these along with some general high-level trends in software engineering in the sector.

Digital payments and eCommerce growth

Disruption and innovation in payments technology are constant; we have been at the cutting edge of real-time payments through our work with Vocalink (a Mastercard company) to build their Immediate Payment System, used globally by the likes of The Clearing House in the US and the P27 group in Scandinavia. A recent Mastercard report found that the first 10 weeks of 2020 saw a larger shift towards digital payments than the preceding five years, and that consumers spent nearly $900bn worldwide with online retailers in 2020. Less than a year since contactless limits increased across Europe, Visa has recorded one billion additional touch-free transactions, 400 million of which took place in the UK. 

In light of this surge in digital payments volume and to better align with customer preferences and capitalise on advances in payments technology, stakeholders such as issuers, networks, payments processors, and merchant acquirers are investing heavily to retool their payments systems. Meanwhile, embedded finance, point-of-sale lending and buy-now-pay-later financing products are reshaping the lending and payments experience to create faster digital options with less friction. 

Millennials and Generation Z were already used to managing their financial affairs through digital channels, and they are now joined by many other demographics and late adopters. This means there will not be a significant rollback to the old ways of interacting with financial products, presenting a clear opportunity for innovators within B2C FinTech.

FinTech Software Engineering Trends

In recent years, leading banks have been modernising as part of a strategy to exit or manage legacy core systems that inhibit faster and more transformative technological innovation; more resources will now be diverted to this strategy. Those slower to embrace genuine digital transformation have been pushed towards modernising with short-term fixes that will require permanent solutions.  

From our position as specialists in soft real-time distributed backend systems, it has become abundantly clear that resilience and scalability are not a given for all FinTechs, nor are they good enough as afterthoughts. The risk to reputation and trust when problems occur under system stress can inflict damage that is hard to recover from. Using technologies that have scalability baked in and are highly reliable, such as Erlang and the languages that run on its BEAM virtual machine, may not only help startups avoid technical debt but may actually save your entire business during unexpected challenges like those of the current moment. Interestingly, much of what made Erlang successful in the telecoms industry (it was originally developed at Ericsson) applies to FinTech use cases – check out this post by our Nordics MD, Erik Schon: How Telcos can help FinTechs succeed.

In successful FinTech stacks, services that are loosely coupled and readily upgradeable are the norm. It’s advisable to not rely on just one software vendor to avoid damaging lock-in. Instead, FinTechs build their own ecosystem of high-performance technology providers that can be added to, upgraded and replaced as required. 

On the product side, constant iteration is necessary to stay ahead of the competition. The providers that meet or surpass customer expectations by offering value-added services designed for specific segments are building loyalty and taking market share. Although the client-facing frontend must deliver from a CX and UX perspective, it has to be backed by reliable infrastructure with minimal downtime and other disruptions. Read our summary of Memo Bank’s adoption of Elixir for full-stack development; they have just successfully secured a new funding round of €13 million.

As previously stated, these trends are not new; they are in fact much-followed principles in software engineering and in FinTech, but they are proving even more important as guiding principles in the new environment we are working in.

DLT / Blockchain

In terms of media spotlight, cryptocurrency has been the most talked-about part of how monetary models are changing. We have worked with Distributed Ledger Technology (blockchain), the underlying technology of crypto, for quite some time and recognise the potential to offer exciting enterprise solutions in FS. Blockchain network and protocol layers are now widely accepted as robust and stable, ready for innovation to move to the application layer where the real opportunity for differentiation lies. We recently co-hosted this interesting panel debate on blockchain use cases in FS as part of Fintech Week London – Get the recording here.

As the higher-stack application layer is more nascent, this is where experienced developers become especially valuable in avoiding wasted time and resources. The Erlang Solutions team consists of domain experts who have been involved in a wide variety of innovative blockchain projects. We have observed that PoCs are increasingly being replaced with deployments to real production systems in FinTech and beyond. While most use cases remain private and permissioned, this is still an encouraging indicator for anyone interested in leveraging the technology. 

What’s Next For FinTech

The FS industry will need to make permanent some of the lessons of the lockdown periods to create more agile workforces that boost productivity, creativity, and collaboration. FIs will look to increase investment in FinTech to stay competitive, not only in customer-facing digital tools but also in the back office, as a means to improve processes and reduce costs. Software engineering has become the core of value creation, and the methods used can significantly influence business results, especially in fast-moving sectors such as FinTech.

Erlang Solutions have over 20 years of experience building critical digital infrastructure that scales to billions of users without downtime. Talk to us about how we can help you develop a future-proof system that is faster, easier to maintain, more reliable, and cheaper to run. You can read our Founder and Technical Director Francesco Cesarini’s account of how Erlang Solutions have applied lessons from the 2008 crisis to our operational model today.

To receive our whitepaper report on Trends In FinTech 2021, please sign up for our FinTech Matters newsletter here, where you can receive that and other exclusive industry content (you can unsubscribe at any time).

The post FinTech 2021 State of Play appeared first on Erlang Solutions.

by Michael Jaiyeola at August 18, 2021 12:00

August 17, 2021


Adventures in WebRTC: Making Phone Calls from XMPP

Normally when I'm writing an article, I prefer to not front-load the article with a bunch of technical stuff and instead bring it up organically. This article is different, though, and if anyone is going to get anything out of this I've got to set up a bit of background, so bear with me. I'll go into more detail on these things when it makes sense, but I have to at least introduce the players in our little play.

First, we have calls. You probably know about these, phones aren't exactly new, but I want to clarify that these are voice calls, as you may expect from a phone. One person calls the other, it rings, they pick up, and then audio conversation can be had.

Second, we have XMPP, a chat protocol that's been popular in the FOSS (Free and Open Source Software) community for years. It has also made appearances in various commercial offerings, including Google's GTalk, Facebook's Messenger (at one point), and others. The big feature XMPP has compared to other chat protocols is that it's a standard, which means there are multiple implementations of both clients (the program the user uses) and servers (which host the user's account). There is also a large collection of extensions which extend the standard to provide more features that clients and servers can choose to implement. Just as importantly, it's "federated", which means users of one server can talk with users of another server seamlessly.

Third, we have Jingle, which is one of the previously mentioned extensions; it allows two XMPP users to set up a peer-to-peer data connection, in this case for exchanging voice data.

Fourth, we have Asterisk. Asterisk is an open source telephone system. You can use it to receive or send phone calls, set up phone trees and extensions, etc. Because it supports many different protocols for sending voice data, it can be used to connect a call from one protocol, like Jingle, to another.

And finally, JMP, which is the company I'm working for. JMP integrates XMPP with the mobile phone network, giving XMPP clients a user they can contact representing any phone number, and we'll turn the chats and Jingle calls into SMS messages and phone calls and vice-versa. This allows JMP's customers to use a client of their choice, across mobile devices and desktops, to communicate with people who are still using SMS and traditional telephones and haven't moved to XMPP yet.

Ok, now we can establish the starting point of our story. We had SMS working in both directions already, and thanks to the work of my co-worker singpolyma and patches from a user named eta, we already had phone calls coming in properly. But allowing our users to call out to phone numbers, that is, from XMPP to cell phones, wasn't working yet. We figured it was just a small tweak to the existing setup, so I set out to find the simple change that was required.

And now, over a month later, here's the path I went through, with gratuitous technical details along the way.

My Initial Testing Setup

It started out pretty simply. I have an app on my phone, Conversations, which is one of the XMPP clients we recommend to our users. I tested that I could receive calls, and everything worked well. But when I tried calling a special user created just to test outbound calls, my phone would ring and I would answer it, but the app would just show "connecting" forever.

I tested a normal Jingle call to another user of the same app, and it worked in both directions, so I knew the app worked fine. I didn't see anything in the logs from my phone that described what might be the problem, so I had to look somewhere else. There were some logs in Asterisk, which acts as a bridge between Jingle and the phone network, but nothing stuck out right away. To get more information about what was actually happening, I wanted to get into the code and add my own debugging logic. Luckily, Conversations is FOSS, so that's very possible. But it was inconvenient to install a development build of Conversations onto my phone, because that would replace the app I use every day. I also wanted to use more of the tools I have on my computer, so I decided to install Movim instead. Movim is another XMPP client that supports the same voice calling features, but it runs on the computer. In a web browser, in fact. Normally it runs as a hosted setup, where someone runs a Movim server for you out on the web and you just connect to it as a web page, but given what I wanted to accomplish, I had to run the server myself on my computer so I could get at both sides of the conversation. A quick test confirmed that Movim could send and receive XMPP-to-XMPP calls just fine, but had the same issue with XMPP-to-Phone calls, so it wasn't just a Conversations bug.

I was now ready to start actually digging into this problem.

Initial Jingle Debugging

XMPP, as a protocol, works by sending XML stanzas over a connection. This means it's very easy to just look in some logs and see what's actually going on. For example, if I were to send a normal message it may look something like this:

<message type='chat' id='dd75a234-8f44-46ff-bd37-c878d04aef92' to='juliet@example.com'><body>Hello!</body></message>

That makes it pretty easy to debug, which is why that's where I started. My initial theory was that there was something wrong with the messages being forwarded back and forth between my users and Asterisk that was making it impossible for them to talk to each other. To investigate, I changed my local version of Movim to log every XMPP blob it sent and received, so I could inspect them with my human eyes and brain to make sure they looked legitimate. To my disappointment, they did appear to be relatively sensible on both the Asterisk and Movim sides; the information they exchanged seemed to contain accurate addresses and formats. It would have been nice to have been done here.

Oh well! Undeterred, I traced through the Movim code to figure out where the information eventually ended up. Somewhere in the code Movim must use these exchanged addresses to establish the voice connection. Eventually I got down to a line in the front-end JavaScript that just took the result, converted the format of the data, and then called setRemoteDescription on some peer connection object. That's part of the WebRTC APIs. Crap.

Detour 1: WebRTC

WebRTC is a set of APIs supported by modern web browsers that allow users to establish peer-to-peer connections. Normally when a user is visiting a webpage, even a page that gives the appearance of interacting with other users, that interaction is actually done through the web server. To illustrate the difference, let's assume that two people, named Romeo and Juliet, are using a webpage to chat. Romeo visits the webpage, contacting the server and requesting the chat page, which is sent to his browser to show him a box to type things in and the messages he's already received. He sees a new message from Juliet, so he types his response into the box and hits send. What actually happens is that Romeo's browser sends the message he typed to the server along with information about who should receive it. The server takes the new message and stores it along with the other messages in the list of messages Juliet has received. At some point in the future Juliet will load this page, and in doing so the server will send her all the messages that have been stored for her, including this new one. If the site is fancy, she won't even have to refresh the page to get them! Maybe it's periodically asking the server if there are any new messages so long as she's on the page, or maybe there's even a WebSocket waiting to be pushed messages, but in either case Romeo's message goes from his computer, through the server to get stored, and then is retrieved from that server when Juliet's computer requests the messages.

That's fine for occasional, short bits of text, or even the occasional picture. But if we imagine that Romeo instead wanted to start an audio or video chat, that's a whole other thing. First of all, it's a lot more data, and it's constantly generating more and more as the call goes on. And that data makes two trips, because Romeo has to send it to the server, the server has to put it somewhere, and then Juliet has to pull it back down. So it would be nice for the server operator if the two users could send the messages directly to each other, rather than involving the server.

The second reason is latency, or how long it takes for a sent bit of data to be received. Even if the server could handle all the data Romeo was sending it, it will usually take longer for the data to go from Romeo, up to the server, then from the server down to Juliet, versus just going directly from Romeo to Juliet.
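The store-and-forward model described above can be sketched with a toy in-memory "server". This is an illustrative model only (the class and method names are invented here), not Movim's or anyone's actual code:

```javascript
// Minimal model of server-relayed chat: the server stores each message
// until the recipient asks for it. Names are illustrative only.

class ChatServer {
  constructor() {
    this.mailboxes = new Map(); // recipient -> queued messages
  }
  send(from, to, text) {
    if (!this.mailboxes.has(to)) this.mailboxes.set(to, []);
    this.mailboxes.get(to).push({ from, text }); // stored server-side
  }
  fetch(user) {
    // What happens when the recipient loads (or polls) the page.
    const msgs = this.mailboxes.get(user) || [];
    this.mailboxes.set(user, []);
    return msgs;
  }
}

const server = new ChatServer();
server.send('romeo', 'juliet', 'Shall I hear more?');
// Nothing reaches Juliet until her client asks the server:
console.log(server.fetch('juliet')); // [ { from: 'romeo', text: 'Shall I hear more?' } ]
```

Note that every byte makes two trips, sender to server and server to recipient, which is exactly the overhead a peer-to-peer connection avoids.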

So the way WebRTC works is that the user's browser has support for a bunch of standards for forming peer-to-peer connections (person-to-person, user-to-user, browser-to-browser, whatever you want to call them). If Romeo wanted to start a video call with Juliet, he would send her a special message through the server as normal, but this message would contain information on how Juliet could contact him directly. If she wants to talk with him, she would respond with a special message of her own (also through the server) containing information on how she could be contacted directly. Code on each of their webpages would take the special direct-contact information the other party sent them and give it to the browser through the setRemoteDescription method I mentioned earlier to signal to the browser that it would like a direct connection to be established. The page doesn't have to know or care how that happens, it will just be told when it works or doesn't. And life will be good.
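The offer/answer dance described above can be modelled without a browser. In this sketch, plain objects stand in for real RTCPeerConnection state; a real browser would generate SDP descriptions and gather ICE candidates, so only the order of the exchange is faithful here:

```javascript
// Toy model of the WebRTC signalling handshake. FakePeer stands in for
// RTCPeerConnection; only the message order is faithful, not the contents.

class FakePeer {
  constructor(name) {
    this.name = name;
    this.local = null;   // what we told the other side about ourselves
    this.remote = null;  // what the other side told us
  }
  createOffer()           { return { type: 'offer', from: this.name }; }
  createAnswer()          { return { type: 'answer', from: this.name }; }
  setLocalDescription(d)  { this.local = d; }
  setRemoteDescription(d) { this.remote = d; }
  connected() { return this.local !== null && this.remote !== null; }
}

const romeo = new FakePeer('romeo');
const juliet = new FakePeer('juliet');

// 1. Romeo creates an offer and it is relayed via the server (not shown).
const offer = romeo.createOffer();
romeo.setLocalDescription(offer);
juliet.setRemoteDescription(offer);

// 2. Juliet answers, also relayed through the server.
const answer = juliet.createAnswer();
juliet.setLocalDescription(answer);
romeo.setRemoteDescription(answer);

console.log(romeo.connected() && juliet.connected()); // true
```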

Back to Jingle

Ok, so if life should be good, why was I unhappy that the information I was tracing was going directly into setRemoteDescription? Well, because life wasn't good and things weren't working. And more importantly, it wasn't as easy to figure out what's going on in WebRTC. I had the code for Movim, I'd already changed it to get extra diagnostic information and I could easily add more code to tell me more things, but the browser's implementation of WebRTC is a bunch of C++ that's built directly into the browser. Even though the browser is FOSS, so the code is available, it's still much harder to make a change to it, build a new version of just that part, integrate that into the entire browser, and then build all of that just to test one small change.

So I wanted to avoid that if I could. For now, things were still ok. WebRTC is something the browser knows about, so it should have some kind of tools for WebRTC app developers to help debug things. I was using Firefox, so I searched for "Firefox WebRTC debug" and found an add-on. I installed it and was instantly disappointed. It told me essentially nothing that I couldn't have found out already. It doesn't give any insight into the inner workings of WebRTC; it just gathers up the events and properties any app could have subscribed to, and subscribes on my behalf, giving me a list of updates. I guess that's better than having to write that code myself, but in this case I just saw a list of updates that seemed reasonable, and still nothing worked. Not very helpful.

[Screenshot of Firefox's WebRTC debugging UI]

The situation looked a little better on Chromium, so I switched over to that. There's a built-in special URI, chrome://webrtc-internals/, that provided on the left of the page the same log of events that Firefox's add-on did, but on the right allowed me to peek inside all the various data structures in their current state and see what was going on. Finally I was getting somewhere!

[Screenshot of Chromium's WebRTC debugging UI]

I started by comparing working calls (inbound calls) to broken calls (outbound calls), to get a sense of what was different about them. There were a few false starts here that ended up coming down to randomness instead. It seemed like inbound calls always had some events in one order, and outbound had them in another, so maybe it was a race condition there! But then 1 in 5 inbound calls would actually have them in reverse order and it would still work. It looked like maybe it would work only if srflx candidates were chosen (don't worry about what that means), but then it wouldn't again. I was grasping at straws trying to find a pattern. While I was looking through properties, I did notice one strange thing: one of the pieces of information that is exposed is the number of bits flowing through each candidate pair and each interface. I noticed as I was looking around that the peer connection's inbound data rate was 0, but one of the interfaces had data streaming into it. It just wasn't the interface that had been chosen for the connection... That was weird. It meant Asterisk was trying to send data to the wrong place. To understand what that meant we have to go on another detour!

Detour 2: NATs and ICE

So what are these "candidates" and "addresses", and why are we exchanging them, pairing them up, and choosing them? This is part of the standard called ICE (which has a newer version, also called ICE). This standard also relies on another standard called STUN, and optionally an extension to STUN called TURN. That all sounds pretty complicated, but what is it for? The problem is NATs. First I'll give the basic, oversimplified version of the problem. Then the basic, oversimplified version of the solution.

The Problem: NATs

The way practically the entire internet works is that each device is given an address. Then, when one computer wants to talk to another, it wraps the data up into a packet, addresses the packet to the other computer, and pushes it out the wire (or through the air) onto "the network". If the computer on the other end of that wire (AKA the router) is the one the packet was addressed to, then the packet is received! If not, but the router knows where to find that computer, the packet is sent along that wire and bounces from computer to computer until it reaches its destination. There's a problem with this, though. In the original design for the internet, addresses were given enough digits to allow for roughly 4 billion different addresses in the best-case scenario. In reality it was much fewer than that. In the 80s that may have seemed like an enormous number of computers, but these days a single human may have a laptop, tablet, desktop, and a phone. They may also have a thermostat, a fridge, a speaker, a television, and various light bulbs and sensors that are also on the internet. Even in the 90s a company may have had a building full of hundreds or thousands of computers, all for people who had their own computers at home. There was just more demand than expected. So, for this reason (among others), NATs became common.
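The "roughly 4 billion" figure is just the size of a 32-bit address space:

```javascript
// IPv4 addresses are 32 bits, giving 2^32 possible values.
const ipv4Addresses = 2 ** 32;
console.log(ipv4Addresses); // 4294967296, i.e. roughly 4.29 billion
```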

The idea of a NAT is relatively simple. We have one side, the Local Area Network (LAN), and another side, the Wide Area Network (WAN). Think "inside my network" and "the internet". Inside my network I can give computers whatever addresses I want, so long as each one is unique among the computers inside my network. There are some strong recommendations on what kinds of addresses to give out, but technically I can do whatever. So if I have a LAN, and you have a LAN, we can both have computers on our LANs addressed "", and that's fine because the address is "local" to our own networks. So if one computer on my local network wants to talk to another, it works just like I described before: "" produces a packet for "" and sends it along its cable. The router sees the packet for "" and thinks "Ah, that computer is down this wire" and sends it along, and "" receives it. All is as it was.

But what if "" wants to talk to a computer on the internet? It addresses the packet the same as before, and sends it to the router the same as before. The router will look at the destination address and see that it's on the WAN side, which means the data has to go out onto the net. But, the internet is really only useful if packets can be responded to. I want to ask the internet for something, and get an answer back! But the router can't just tell the other side that this traffic is from "", because that's a private "local-only" address. Multiple networks could have a computer with that address and there'd be no way to figure out which was which! So what the NAT does is to rewrite the packet to put the router's own WAN address as the source of the message before forwarding it on. That way, if the other side does respond, then the response will come back to the router, and the router can figure out which request that response was for, rewrite the destination to be the original sender of the packet, and then forward the response into the LAN, rewritten to look like it's for the proper computer. The final piece is that packets aren't just addressed to a computer, but a combination of computer and "port", so the computer can handle multiple independent connections and know which data is associated with which connection. So if a packet goes out from computer "" and port "537", then the NAT might give it port "537", or "700", or "12432" on the WAN side. It can pick whatever it wants; what's important is that it remembers its choice. Then when a response packet comes back to port 12432 on the public side it can look up in its table to see that this really means "" at port 537 on the LAN side, so it knows how to rewrite it properly.
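The translation table described above can be sketched in a few lines. This is a toy model, not any real router's code, and every address in it is a made-up example:

```python
# Toy NAT: outbound packets get the router's public identity and a
# remembered port; responses are looked up and rewritten back.

WAN_ADDR = ""  # hypothetical public address of the router

class Nat:
    def __init__(self):
        self.next_port = 12432
        self.out_map = {}  # (lan_addr, lan_port) -> wan_port
        self.in_map = {}   # wan_port -> (lan_addr, lan_port)

    def outbound(self, src):
        """Rewrite a LAN source to the router's WAN identity, remembering the choice."""
        if src not in self.out_map:
            self.out_map[src] = self.next_port
            self.in_map[self.next_port] = src
            self.next_port += 1
        return (WAN_ADDR, self.out_map[src])

    def inbound(self, wan_port):
        """Look up the original LAN sender for a response packet."""
        return self.in_map[wan_port]

nat = Nat()
public_id = nat.outbound(("", 537))
original = nat.inbound(public_id[1])
```

The key property, which matters for everything that follows, is that `in_map` only ever gains entries when a LAN computer sends something out first.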

So, that's lovely. Now network operators and consumer network equipment can assign whatever addresses they want on the LAN, and none of it reduces the number of addresses there are out on the net. People can still talk to the internet, and it just works the way the computers expect. You can even put one NAT on the LAN side of another, and then put that on the LAN side of another NAT, and have a tree of NATs! Each packet that comes through gets translated by each layer, and then forwarded on to be translated by the next, and when a response packet comes in each layer performs the transformation and forwards "down" the tree until eventually it gets back to the original computer. So what's the problem? Why bring any of this up? It's because this system only works if everyone inside a NAT only ever wants to reach out to things "on the internet", like servers. But if you remember why you're reading this in the first place, WebRTC is trying to set up direct peer-to-peer connections between two different internet users without having traffic go out to a server at all! NATs can't work in that way, because the whole point of their translation is that the computer's address is not something routable on the general internet; it's a local address that only exists within its own LAN. There's nothing you can do to tell a NAT, or the top NAT in a tree of NATs, how to route a packet it's received that it didn't already have a translation saved for.

The Solution: STUN, TURN, and ICE

That last sentence really is the key to how we're going to overcome this limitation. If I want people to be able to talk to my browser from anywhere on the internet, what I can do is first send some packet out somewhere so my NAT and any other NATs between me and the internet will all make a translation that will allow responses to that packet to find their way back to me. Then, if somehow the other browser I wanted to communicate with could know the last layer of that translation, the "public" address and port that eventually made it onto the real internet, then they could send traffic there and all the NATs would perform their translation and I would eventually get the packet they sent! Magic! The problem, though, is that I don't know what that final "public" address and port are. Most routers don't expose that kind of information to the LAN, and even if they did there's no guarantee that their WAN address isn't just some other NAT's LAN address. Really we don't care about my router, or how many NATs there are, we just want to know what the internet sees.

So what STUN does is define a format for packets that one can use to find out this information. Then people can run STUN servers, either for public use, or maybe specific to the particular app that's looking to communicate. The STUN server runs on the public internet, and when I send it a packet asking it who I am, it will respond with a packet back to me. But inside that packet it will include the address and port it saw my request come from, which allows me to know my own "public" identity. It would be like calling someone to find my own phone number by asking them what number they see. There's more to STUN than that, but that's all we need for now.
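The STUN binding exchange can be sketched as a plain function. This deliberately ignores the real STUN wire format; the point is only that the server's whole job is reflecting the source it observed, and the addresses here are made-up examples:

```python
# A STUN server doesn't need to know anything about me; it just reports
# the address:port my request appeared to come from.

def stun_binding_response(observed_src):
    """Reflect back the (addr, port) the request arrived from."""
    return {"mapped-address": observed_src}

# My LAN identity is ("", 537), but after NAT rewriting the
# server observes the router's public mapping instead:
response = stun_binding_response(("", 12432))
my_public_identity = response["mapped-address"]
```

That `my_public_identity` value is exactly what I'd then hand to the other peer as one of my candidates.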

This will work on many NATs, but not all, because of course it has to be difficult. Some NATs go further than what I've said so far. Rather than just remembering an association between a source address and port and whichever public port they pick, they remember the full combination of source address and port and destination address and port when picking an output port. That means if I send a packet to one destination the NAT might pick port "3425" to use, but if I send from the same port on my end to a different destination, rather than re-using 3425 it will pick a totally different port. Because of this, if I send a STUN packet out to the STUN server and it tells me my public address and port, and I give that to my peer, then when they use that address to try and talk to me it won't match my NAT's table, because there are no entries for my peer's address at all. No packets were ever sent to that address, so no entry was made. These are called "symmetric NATs" by STUN, and they completely ruin our plan.
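Why the symmetric case defeats the STUN trick can be made concrete by keying the toy mapping on the destination as well. Again, every address and port here is invented for illustration:

```python
# Symmetric NAT sketch: the mapping is per (source, destination), so the
# port learned via the STUN server exists only for the STUN server.

class SymmetricNat:
    def __init__(self):
        self.next_port = 3425
        self.mapping = {}  # (lan_src, remote_dst) -> wan_port

    def outbound(self, src, dst):
        key = (src, dst)
        if key not in self.mapping:
            self.mapping[key] = self.next_port
            self.next_port += 1
        return self.mapping[key]

    def inbound_allowed(self, remote_src, wan_port):
        """Inbound traffic only passes if this exact peer created the mapping."""
        return any(port == wan_port and dst == remote_src
                   for (src, dst), port in self.mapping.items())

nat = SymmetricNat()
me = ("", 537)
stun = ("", 3478)
peer = ("", 4000)

learned = nat.outbound(me, stun)  # STUN reports this port back to me
# The peer tries the learned port, but no table entry mentions them.
</imports>```

Handing `learned` to the peer is useless: `inbound_allowed(peer, learned)` is false, because the only entry was created by my packet to the STUN server.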

To get around this TURN was created as an extension to STUN. TURN allows another kind of request to a STUN server which asks the STUN server to allocate a port on the STUN server itself, and forward any packets it receives there back to me. Then the STUN server tells me what its own address is, and what port it picked for me (very much like an opt-in NAT, actually). So now, even if I have a symmetric NAT, I can still give the other person the STUN server's address and my port there, and I know those packets will make their way back to me down the same connection I used to make this request. This isn't ideal. It's not really peer to peer anymore. Packets are still going through another server, which means the TURN server (really the STUN server that supports TURN) will need to be able to handle the volume of traffic, and there's still an extra jump adding latency.

There are a few silver linings, though. The first is that if Romeo needs to use TURN, that doesn't mean Juliet does. So any packets Juliet sends to Romeo may go through another server, but packets from Romeo to Juliet can go direct, which is still half as much traffic to the TURN server. Even if they both need to use a TURN server, there's no need for them to use the same one. That can mean that Romeo is talking to one TURN server while Juliet is talking to a completely different one, so each of those servers only sees half the traffic. Also, TURN servers can be much simpler than app servers. They don't have to put anything into a database or anything like that; they just turn each packet they receive into one they send, without even knowing what's in it. This allows them to process more packets per second. And finally, it means the app developer doesn't have to implement both a peer-to-peer mode and a "that didn't work" mode. If they run TURN servers, then at the very least the connection will use those and run the same as if it were truly peer-to-peer.

So we have our actual addresses, which might work if we aren't behind a NAT. We have STUN addresses which will work for many people that are behind one or more NATs. And we have TURN addresses which should work for everyone. But TURN is more effort, so we'd rather use STUN if that works. And even STUN is more work than it needs to be if direct messages work, like if both of our two users are behind the same NAT. So what we need is a way to figure out which of these work and then pick the best one. This is what the standard ICE adds to STUN and TURN, and where we get to the "candidates" I mentioned earlier.

With ICE Romeo would find all the network addresses his device has, and maybe use STUN to find his public addresses and ports, and maybe even TURN to get a new public address and port. The idea is that any of these might work; they are "candidates" Juliet may be able to use to send packets to. Juliet will do the same thing on her end and get her own list of candidates. Then they send these lists to each other, and each side pairs each of its own candidates with each of the other side's candidates to get a list of candidate pairs. So if Romeo had "A, B, C" and Juliet had "X, Y, Z", then both sides would make a list of candidate pairs "(A, X), (A, Y), (A, Z), (B, X), (B, Y), (B, Z), (C, X), (C, Y), (C, Z)". Now they go through each pair sending packets from their candidate to the other candidate. If one of them receives a packet then that means this direction works, at least. They respond with a success message, and then they will immediately send their own check for that pair back, if they hadn't already, to see if it works in the other direction too. At the end of this process we will have tested all candidate pairs and we'll have a list of the ones that worked in both directions.
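The pairing step above is just a cross product followed by filtering. In this sketch, which pairs "work" is stubbed in for illustration since real connectivity checks involve actual STUN packets:

```python
# Each side crosses its own candidates with the remote ones, then keeps
# the pairs whose connectivity checks succeed.

from itertools import product

romeo = ["A", "B", "C"]
juliet = ["X", "Y", "Z"]

pairs = list(product(romeo, juliet))  # 9 candidate pairs

def check(pair, reachable=frozenset({("B", "Y"), ("C", "Z")})):
    """Stand-in for a STUN connectivity check on one pair."""
    return pair in reachable

working = [p for p in pairs if check(p)]
```

With 3 candidates on each side we already test 9 pairs; this quadratic blow-up is one reason ICE cares so much about ordering and pruning.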

Along with the candidates ICE also has us keep track of what kind of candidate each one is, "host" for network addresses the device has directly, "server reflexive" ("srflx") for the ones we got from STUN, "relayed" ("relay") candidates from TURN, and "peer reflexive" ("prflx") which only comes up in cases where one side successfully receives a packet from an address that isn't otherwise a candidate, meaning there's some other network quirk between the two users. Each of these candidate types is given a priority based on our preferences, for example we'd prefer to use host over srflx, and we'd prefer to use srflx over relay. Then we can combine the priorities of each side of a pair to get a priority for each working pair. At this point we can simply pick the best working pair, given our priorities.
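The prioritisation just described follows concrete formulas from the ICE RFC (RFC 8445). The type preferences below are the RFC's recommended values; local preference and component are simplified to constants for this sketch:

```python
# Candidate priority (RFC 8445 s5.1.2.1) and pair priority (s6.1.2.3).

TYPE_PREF = {"host": 126, "prflx": 110, "srflx": 100, "relay": 0}

def candidate_priority(cand_type, local_pref=65535, component_id=1):
    return (2**24) * TYPE_PREF[cand_type] + (2**8) * local_pref + (256 - component_id)

def pair_priority(g, d):
    # g = controlling side's candidate priority, d = controlled side's
    return (2**32) * min(g, d) + 2 * max(g, d) + (1 if g > d else 0)

host = candidate_priority("host")
srflx = candidate_priority("srflx")
relay = candidate_priority("relay")
```

The `min`/`max` construction means a pair is only as good as its worse half: a host/host pair always outranks any pair that involves a relay candidate.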

There is one last wrinkle. Networks are complicated, and sometimes things go missing. As a last protection against this, one side of ICE (conventionally the person making the call) is nominated as the "controlling" peer. The controlling peer gets final say in which pair is actually nominated to be used. So once we have a pair we like, the controlling peer sends a request again with a "use-candidate" value to tell the other side that this is it, the other side responds back "ok" so everyone knows we're all clear and that communication on this pair still works. At this point we're finally done with ICE and we have a pair of ports that can be used to talk from peer to peer.

Back to WebRTC

Ok, that was pretty in-depth. Let me remind you where we are here: I'm looking in my WebRTC debugging tools and I'm seeing that the connection object isn't getting any data, and the candidate pair chosen to represent that connection isn't getting any either. But there is a candidate pair that is getting data, it's just not the right one!

So my first thought on how this could happen was that maybe there was a disagreement between Movim/Chrome and Asterisk on which of them was the controlling peer! That would also explain why inbound calls work, because the caller is always the controlling peer. So if Asterisk thought it was the controlling peer in both cases, then Chrome would agree on inbound calls, but disagree on outbound calls. It felt pretty right. There is a section in the ICE standard on how to resolve a situation like this, but maybe it wasn't being followed properly. Here's the problem... this is a pretty internal detail of the implementation of ICE in these two pieces of software. I didn't want to take our production Asterisk server down and add a bunch of logging here, maybe even breaking it in the process. And like I mentioned before, I wasn't excited to rebuild Chrome just to test this. I spent a little bit of time looking to see if there was some implementation of ICE simpler than Chrome that I could use as a stand-in and be more free to make quick changes to, but nothing jumped out that wasn't going to be hard to adapt to my actual use-case. I lamented: I didn't even need logs; what I really wanted was some way to see the data they were sending back and forth without modifying anything. Oh... wait a second...


At this point it became clear that I had been working in web and other special areas for too long. I had been searching for so long for a way to inject logging statements into this flow somehow so I could see what was on the network, when I should have immediately reached for Wireshark. It had previously been a tool in my toolbox, but I hadn't touched it since everything became all-web-all-the-time. Wireshark is a program that just records all of the packets your computer sends and receives, and shows them in an interface that makes it easy to filter, search, and inspect. I didn't need the programs to log what they thought they were doing, I could inspect what they actually did and follow along that way! What's even better is that Wireshark already knows about STUN and TURN, so it can show me the different fields without me having to know how to unpack the bits from the packet myself!

screenshot of Wireshark UI

See here how I can search for "stun" and it'll only show me the packets for STUN? Also notice that I can expand the "STUN" attributes, because it knows about them, and see in plain terms "this is a binding response" and my IP, and also the tiny diamond on the left shows the corresponding request that this response is to. Very handy stuff. Much better than logging.

If you remember what I mentioned before, ICE has bits in the standard that try to correct for the situation where both sides think they're the controller. In order to do that, they declare on each request whether they're making it as the "controlling" or "controlled" party, which means it was easy to figure out if each side thought they were the controlling peer. Sure that this was it, I looked; everything looked like it was to spec. Fuck.

Ok, if Asterisk knew it wasn't the controlling peer, why wasn't it using the candidate pair the controlling peer was nominating? Now that I had real data that was being exchanged, I could start going from packets, to standard, to code, and back, to try and trace out how everything was actually working. After tracing around for a while on how I expected the flow to progress I found a real problem!

    char address[PJ_INET6_ADDRSTRLEN];

-   if (component < 1 || !ice->comp[component - 1].valid_check) {
+   if (component < 1 || !ice->comp[component - 1].nominated_check) {

-       pj_sockaddr_print(&ice->comp[component - 1].valid_check->rcand->addr, address,
+       pj_sockaddr_print(&ice->comp[component - 1].nominated_check->rcand->addr, address,
            sizeof(address), 0);
-       pj_sockaddr_get_port(&ice->comp[component - 1].valid_check->rcand->addr));
+       pj_sockaddr_get_port(&ice->comp[component - 1].nominated_check->rcand->addr));

 /*! \brief Destructor for locally created ICE candidates */

This code was toward the end of the ICE session, where we're taking the result of ICE and preparing to return it to the main code. Every time we get a response to one of our checks it's marked as "valid", and that valid_check field is updated whenever a new check is found to be valid and has a better priority than what's stored there. That way, by the end, we'll have the best-priority valid check easily in reach. Also, any time the controller nominates a candidate we do a similar thing and store the result in the nominated_check field. But here, at the end, we're not using the best nominated check, only the best valid check. That will just work so long as the best valid check is nominated, which is what the standard expects, but isn't technically required, and also isn't what I was seeing. This is great! Finally, something that explains what I was seeing: Asterisk sending to a candidate that wasn't nominated.
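The bookkeeping involved can be sketched in miniature. The field names echo pjproject's `valid_check` and `nominated_check`, but the structure here is purely illustrative:

```python
# The session tracks both the best valid check and the best nominated
# check; the bug was reading the former where the latter was meant.

class IceResult:
    def __init__(self):
        self.valid_check = None      # (pair, priority): best check that got a response
        self.nominated_check = None  # (pair, priority): best pair the controller nominated

    def on_valid(self, pair, priority):
        if self.valid_check is None or priority > self.valid_check[1]:
            self.valid_check = (pair, priority)

    def on_nominated(self, pair, priority):
        if self.nominated_check is None or priority > self.nominated_check[1]:
            self.nominated_check = (pair, priority)

    def final_pair(self):
        # Report the nominated pair, not merely the best valid one; they
        # usually coincide, but the standard doesn't require it.
        return self.nominated_check[0]

r = IceResult()
r.on_valid("pair-1", 90)
r.on_valid("pair-2", 70)
r.on_nominated("pair-2", 70)  # controller nominates the lower-priority pair
```

Here `final_pair()` correctly returns "pair-2"; reading `valid_check` instead would have returned "pair-1", sending media to a pair the controller never nominated.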

So I deployed that and confidently ran it. Still didn't work. That was disappointing.

Ok, back to Wireshark looking for other weird things. Paying closer attention to the actual flow, I noticed that a request would go out for one pair and I'd get the response properly. But that's it. If we remember the standard, there's supposed to be an immediate request in the opposite direction to test that direction, but now that I was looking I only saw responses to my requests, and never the requests originating from Asterisk. This is important because the ICE negotiation isn't fulfilled until the other side makes these requests. From our perspective only one direction works, and it looks like we can't actually talk to Asterisk using the same channel Asterisk can talk to us on. So we keep trying to nominate a pair, but never receive the expected opposite request to confirm to us that this pair is good, so we try again, and again, etc.

This gave me an area in the code to look at, at least. What's worse, this code actually lives outside of Asterisk's codebase, in an external library made just for Asterisk called pjproject. After some investigation and comparing to the standard I noticed this section:

/* Triggered Checks:
 * Now that we have local and remote candidate, check if we already
 * have this pair in our checklist.
 */
for (i=0; i<ice->clist.count; ++i) {
    pj_ice_sess_check *c = &ice->clist.checks[i];
    if (c->lcand == lcand && c->rcand == rcand)
        break;
}

/* If the pair is already on the check list:
 * - If the state of that pair is Waiting or Frozen, its state is
 *   changed to In-Progress and a check for that pair is performed
 *   immediately.  This is called a triggered check.
 * - If the state of that pair is In-Progress, the agent SHOULD
 *   generate an immediate retransmit of the Binding Request for the
 *   check in progress.  This is to facilitate rapid completion of
 *   ICE when both agents are behind NAT.
 * - If the state of that pair is Failed or Succeeded, no triggered
 *   check is sent.
 */


So this comment specifically references the "Triggered Checks" section of the ICE standard in deciding when to send these triggered checks back with the same candidates after receiving a check from the other side. The problem is that it's actually wrong; the standard says:

If the state of the pair is Failed, it is changed to Waiting and the agent MUST create a new connectivity check for that pair (representing a new STUN Binding request transaction), by enqueueing the pair in the triggered check queue.

So in their implementation, if I've already tried and failed to contact you with a candidate pair, and then later I get a request from you with that pair, I should ignore it. The standard, though, says I should instead try again on that pair, since it may have just started working. That's kind of weird, though. If there was an occasional failure and it mysteriously didn't work one in every hundred calls, maybe this would be to blame, but this failed consistently. Every time. What's up? Well, a good place to start is how things end up in the failed state. The code had a hard-coded timeout where it'll retry each request 7 times, with one second between attempts, before deciding the pair doesn't work. That makes some sense, but looking at my Wireshark session I noticed something important.
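The divergence from the standard can be written down as a little transition table. This is my own sketch of the behaviour described above, not pjproject's actual code; the two behaviours only differ for the Failed state:

```python
# Given an incoming check for a pair in some state, what happens?

def on_incoming_check(state, follow_rfc):
    """Return (new_state, send_triggered_check) for the pair."""
    if state in ("Waiting", "Frozen"):
        return ("In-Progress", True)
    if state == "In-Progress":
        return ("In-Progress", True)   # retransmit the in-flight request
    if state == "Succeeded":
        return ("Succeeded", False)
    if state == "Failed":
        if follow_rfc:
            return ("Waiting", True)   # RFC: re-queue a fresh check for this pair
        return ("Failed", False)       # observed behaviour: ignore the pair forever
```

Once a pair has Failed during the ringing phase, the `follow_rfc=False` branch means no triggered check is ever sent again, which is exactly the silence I was seeing in Wireshark.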

The way Jingle works, when Romeo clicks the "call" button it sends a request to Juliet's device saying "I'd like a call please", and then it starts gathering candidates and sending them over. Juliet's device shows an incoming call screen or something to ask if she'd like to pick up, and when she answers yes she sends her candidates back to Romeo so they can start negotiating. But in this case we're not calling Juliet's device, we're calling Asterisk. The way Asterisk handles this is that it starts ringing on the phone network, and to speed things up in the meantime it starts gathering candidates and builds an ICE session with Romeo's candidates. This means the ICE session has already started before the other person's phone has even started ringing! Then, if the other person accepts the call, Asterisk will send its candidates down to Romeo with the session acceptance, so Romeo can start his ICE session with Asterisk's candidates.

That means that while the phone has been ringing, Asterisk has already been trying Romeo and finding no one's responding (because Romeo hasn't started his ICE session since he hasn't seen any candidates yet). So after 7 seconds Asterisk decides the candidates don't work. Later, when the call has been answered, Romeo starts ICE and starts sending out ICE messages and gets responses, but doesn't get any triggered checks, so he assumes there's something wrong with his responses and that the channel isn't actually working. So he keeps trying to get through and nominate things, but it never works. Ok!

So, assuming that's our problem, there are a few ways to fix it. The first thing we could do is change the ICE triggered checks to be in line with the standard, so it would retry the failed checks when things actually start on the other side. Another way we could fix it would be to change the way ICE works for Jingle in Asterisk and only start the ICE session once we've sent the call acceptance back to the caller. That way both sides will be starting their ICE around the same time, and so they'll likely line up and actually agree on something. The problems were that the first solution involved changing code in pjproject, which was annoying, and the second one was a somewhat involved change to how Asterisk's Jingle integration worked. Instead I opted for a worse, but far easier, solution which was to simply increase the timeout to 45 seconds. This dodged around the problem by assuming that Asterisk wouldn't have considered that candidate failed by the time the person actually answers the call. That way it'll still send the triggered check, and all will be good with the world.

So I rolled that out, and it actually worked! We plan to return to this issue and build a real solution, but for now it allowed us to keep testing.

Mission Accomplished

Mission Accomplished banner on US Navy Ship

So, I now had Movim successfully and consistently making calls out to Asterisk, and thus to real humans' phones. I told my coworkers I had done it, I had found the problem and fixed it, and all was now well. So we tested it with Conversations, the Android client that we expect many of our users to use. Nope. Just as broken as ever. Ok, maybe I was a little hasty... What about Gajim, another desktop client? Busted. Ok, what about Movim on Firefox, where I started before switching to Chromium just for its dev tools? Totally broken.

Ok, so... maybe there was still a ways to go... Don't worry, though, dear reader. The next fixes won't take as long to explain.


I started with Gajim. It's not more important than the others, but when I was testing I noticed Gajim actually printing a useful-looking error message in its error log. That's a very nice place to start! It also wasn't mad about the ICE stuff at all, but about something earlier. At the beginning of the setup, the clients negotiate what kind of data is going to go over this call. Is it video, or audio, or file transfer? If it's audio, what format of audio is it, is it stereo, what levels of quality does each side support? These are the kinds of things where the two clients are trying to come to a consensus on how we're actually going to go about transmitting audio data, once ICE figures out the connection itself. In Jingle the way this works is that during session initiation you can specify the kind of content you want in this session, or you can add new content to an existing session. From then on we can talk about that content by its name and the person who created it: "initiator" for the person making the call, and "responder" for the person who has been called. That's mostly there to prevent a case where Romeo starts a session with Juliet and then they both propose a new audio stream at the same time, and then each side thinks they're negotiating about their own proposal, when in reality there are two proposals. With the creator it becomes clear: they are each talking about their own audio content, and further negotiation is required.

That being said, the code in Asterisk seemed to assume it was always the creator of the audio content. The code was written to send a creator of "initiator" for an inbound call, and "responder" for an outbound call. For inbound calls, Gajim agreed; Asterisk would propose a new session with audio content, and so it was the creator. For an outbound call, though, Gajim would do the same and propose a new session along with audio content to go along with it, but Asterisk would respond back about an audio stream that Asterisk itself had created. But it hadn't created one; Gajim did. So I made Asterisk always use "initiator", which seems to match how clients actually establish sessions. Incidentally, the reason I hadn't noticed this before is that Movim doesn't care about the creator field and also just assumes "initiator".

So after that change it now worked in Gajim!

Firefox and Logging

Movim on Firefox was harder to debug. When I looked at Wireshark, I just saw... nothing. There were occasional single packets going out, but basically it looked like ICE wasn't doing anything. That made it hard to debug...

By now, though, I'd found where ICE actually lives in the code, specifically over in pjproject rather than the main Asterisk code. I'd gained some experience reading that code and working over there, and while doing so I noticed some parts of it already had logging code. If I could just figure out how to turn it on, it might tell me more about what the code was thinking. I'm embarrassed to say it took me a good while to figure out how to get those logs turned on.

I eventually found some forum post somewhere outlining the simple steps I needed to use to enable logging of the data I wanted:

# First we get to the asterisk command shell
$ sudo asterisk -r
# Then we add a new "logging channel" that will include debug logs
> logger add channel some_filename notice,warning,error,debug
# Then we set core (that is, Asterisk) to log up to debug logs
> core set debug 5
# Then we set pjproject to also log debug logs
> pjproject set log level 5
# And then tell pjsip (not sure why it's not pjproject) to actually log
> pjsip set logger on

Not sure why I didn't just guess all of that...

But anyway, now I could run my tests and it would log out to /var/log/asterisk/some_filename! I will admit, it is nice that I wasn't filling the normal log files with junk and could actually see only my test, rather than wandering through days of logs looking for my portion.

When I was done, I could do the reverse (also from the asterisk command shell):

> pjsip set logger off
> pjproject set log level default
> core set debug off
> logger remove channel some_filename

This would stop putting new logs in my some_filename file, but wouldn't delete it. This is also convenient because I could now search through this file without it getting infinitely longer, or filling up with logs I wasn't interested in.

That being said, even the short file for a test that takes a minute can have thousands of log lines, so it still takes some sifting to find the actual information I'm looking for.

I noticed a few important things looking through the logs. The first, and most obvious, thing is that it builds an ICE session many many times. It'll build one, tear it down, build another one, then tear that down, within a second. This made it hard to follow the history of a single session, but was also very obviously something that might be a problem. The second issue I noticed is that all of the sessions got to a point where they said "Resetting ICE for RTP instance", but some of them said "Nevermind. ICE isn't ready for a reset" afterwards, and then things didn't seem to work after that. All the broken ones had "comp_id=2", which meant they were for the second component. Comparing the same logs with Movim on Chrome, there was no second component. Huh.

So what is a component? ICE has a section in the spec for negotiating multiple independent ports in a way where either they both work or the whole negotiation fails, which could be used by applications which need multiple ports to work in coordination for anything to work. The protocol that WebRTC offers for audio is called RTP (Realtime Transport Protocol), which has two modes of operation. Originally RTP had two connections, one where it would send the audio data, and another called RTCP (RTP Control Protocol) where it would send information about how well the audio was sending so the participants could adjust their quality or something. A later version of RTP added an optional feature called rtcp-mux, which allowed the sending of the RTCP information along the same connection as the audio so we only need one connection, and so only one ICE component. Well, when WebRTC was standardized it was decided that WebRTC required the RTP implementation to support rtcp-mux in order for it to be allowed as part of WebRTC. So in Chromium they take advantage of that and just assume it supports rtcp-mux and only start ICE for one component. Firefox, though, felt it was important to be more backwards-compatible and tries to support both rtcp-mux and traditional RTP+RTCP modes. There's a way to tell if the other server supports rtcp-mux, but that information is sent when the other side answers the call, and by then Firefox has already sent all of the candidates for both components.

Ok, so that's why Firefox acts differently than Chrome, but why is it a problem? Surely it should be fine to negotiate two components and just ignore one, and ICE should work fine in either case. Well, that comes down to the constant building and rebuilding. The way original ICE works, the full set of candidates is gathered by both sides and then exchanged, they're all processed, and then a winner is picked. Jingle, though, made a change where it sends each candidate as it's discovered. That way the ICE session can start sooner and can be looking for candidate pairs while the STUN and TURN stuff is going on, and if a candidate is found right away it might be decided it's not even worth getting a TURN candidate, since it'll be lower priority than the valid pair we've already found. That method of operating eventually got its own draft standard under the name Trickle ICE, which is similar but slightly different, and there's a new version of Jingle meant to bring Jingle up to date with new versions of ICE, including Trickle ICE. It's a draft, though. All that is to say, things are kind of a mess, and the version of ICE that Asterisk supports right now is not the Trickle kind. That's a feature being worked on for the future, but it's not released or supported by the XMPP integration at the time of this writing.

So, if Jingle uses Trickle ICE, but Asterisk doesn't support it, how does the XMPP integration in Asterisk work? Well, every time it sees a new candidate trickle in, it just restarts the ICE session as though that candidate were the only one. It's not ideal, but it appears to work a surprising amount of the time! That's likely owing to ICE's flexibility when it gets a request from a pair it doesn't know about: it just adds it to the list. As long as one side knows all the candidates, the other side should respond properly and come to a consensus. Weird, but fine. Maybe it will get better in a newer version of Asterisk. But in this case Firefox has two different components, and it sends the "component 2" candidates after the "component 1" candidates. So Asterisk gets one candidate and sets up an ICE session, then immediately gets the component 2 candidate, tears the first session down, and sets up a new one. But this new session only has candidates for component 2. That's something ICE doesn't allow, so Asterisk's ICE session isn't in a "nominating" mood and just doesn't do anything. That's why Firefox doesn't appear to be able to negotiate a connection.
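The failure mode can be illustrated with a toy model of that restart-per-candidate behaviour (an assumption for illustration, not Asterisk's actual code):

```python
session = None

def on_trickled_candidate(candidate):
    # Roughly what the XMPP integration does: every trickled candidate
    # tears down the old ICE session and starts a fresh one containing
    # only the newest candidate.
    global session
    session = {"candidates": [candidate]}

on_trickled_candidate({"component": 1, "port": 50000})
on_trickled_candidate({"component": 2, "port": 50001})

# The surviving session knows only about component 2, which is not a
# valid ICE session on its own, so nothing ever gets nominated.
print(session["candidates"])  # [{'component': 2, 'port': 50001}]
```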

So now that we know the problem, how do we fix it? Well, we could implement a sketchy version of Trickle inside Asterisk, or even just inside Asterisk's XMPP integration code. That looked likely to introduce new bugs, and I knew someone was already working on a real version of Trickle that may land later. And even though the restart-on-every-candidate approach is nearly broken, it worked with Chromium every time I tested it. I really just wanted Firefox to work the same way Chromium does, so that's what I built: some code in the XMPP integration part of Asterisk that ignores candidates for component 2. I know they're not going to be used either way, because all of our clients support rtcp-mux, and this is basically one line of code that is highly unlikely to introduce any new bugs!
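In spirit, the fix is just a filter on incoming candidates (a sketch of the idea, not the actual Asterisk patch):

```python
RTP_COMPONENT = 1  # with rtcp-mux, audio and RTCP share component 1

def keep_candidate(candidate: dict) -> bool:
    # Ignore "component 2" (RTCP-only) candidates entirely, since all
    # of our clients support rtcp-mux and will never use them.
    return candidate.get("component") == RTP_COMPONENT

incoming = [
    {"component": 1, "port": 50000},
    {"component": 2, "port": 50001},  # silently dropped
]
kept = [c for c in incoming if keep_candidate(c)]
print(kept)  # [{'component': 1, 'port': 50000}]
```

With component 2 filtered out, the restart-per-candidate behaviour only ever sees component 1 candidates, so the rebuilt session stays valid.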

Now Movim on Firefox works too! Three down, and one to go!


Conversations

This was it, our most supported client, and if it didn't work here we couldn't really call this a feature. And it still wasn't working after all those other fixes. It's hard to use Wireshark when the app is running on my phone, but I now had the power of Asterisk logs at my disposal! Using those I took a look and... everything looked pretty good. Candidates were being exchanged, negotiated, and chosen as valid, and then... just sitting there. None of the valid options were ever being nominated, so both sides just waited until someone got bored of waiting and ended the call. I decided that this was really interesting, but I might need Wireshark to see more. Great.

There's probably some way I could have routed my phone's network through my computer to use Wireshark, but if I found that I needed more logging it would be useful to be able to build the app myself. And if I can build the code, then I can just run the Android Emulator, which lets me run the app on my computer in a fake phone environment. So I pulled down the code for Conversations and got it set up and working in Android Studio. Now it was as easy as clicking the "play" button and running Wireshark to watch the packets the virtual phone was sending. Since I already knew what to look for from the Asterisk logs, it was pretty easy to look at the stream of packets and see that neither side was including the "use-candidate" attribute which, if you remember, is how the controlling side tells the controlled side which candidates we're going to go with. That would explain why inbound calls work with Conversations: for inbound calls Asterisk is the controlling side and includes that "use-candidate" value, so we're all on the same page; but when the roles are reversed and Conversations is the controlling side, it never nominated anything. That's weird. Unlike the Firefox case, it's not like it was doing nothing; it was definitely making requests to find a list of valid candidates. And unlike the original Chromium problem, triggered checks were coming back just fine, and both sides appeared to know what the valid list was. After combing through packets, I did manage to find one weird thing. Wireshark knows about the standard attributes in STUN and can pull those values out of a packet and show them to me, but some of the packets around the time I would expect nomination to start had some extra attributes: values Wireshark didn't recognize, listed only by their type, "0xC001". It's possible it was nothing, but it was something the working implementations didn't do, so it was a thread I had to pull on.

I pulled down the code for libwebrtc, the WebRTC implementation Conversations uses, and searched through the code for that value; it was associated with STUN_ATTR_NOMINATION. Continuing to trace through the code for where this value was used, and which sets of conditions led to that code, and so on, I eventually found that libwebrtc supports an option called "renomination". I managed to find what could be called a standard only by the most generous definitions, ICE Renomination: Dynamically selecting ICE candidate pairs. This document doesn't actually say which STUN attribute to use for the nomination, saying only "we define a new STUN attribute", but the code I was looking at seemed to line up with the intent of the document, at least.

The intention of this standard is to make it easier to control which pairs get nominated by ICE. In base ICE we test to find valid pairs, then some are nominated, and then the nominated pair with the best priority is chosen. Renomination tries to make it easier for the controlling side to change its mind, for example if a WiFi device goes out of range during negotiation, making the cell network candidates the better choice. I'll be honest, I'm not really sure how often this would really apply, because ICE doesn't run for very long. Either way, the way this proposed extension handles "mind changing" is by replacing the "use-candidate" attribute, which is either present or absent, with a different "nomination" attribute that contains a number. This number matters more than the priority, so the controlling side can nominate one pair with value 2, and later nominate another pair with value 3, and the latter will be the best choice no matter what the priorities say. So that's what the "unknown" attribute I saw in Wireshark was, and also why no candidates were ever chosen: Conversations was sending one attribute to nominate, and Asterisk was waiting for a different one.
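The two nomination schemes can be contrasted with a short sketch (my reading of the draft, not libwebrtc's code):

```python
def pick_base_ice(pairs):
    # Standard ICE: among pairs flagged with use-candidate,
    # the one with the best priority wins.
    nominated = [p for p in pairs if p.get("use_candidate")]
    return max(nominated, key=lambda p: p["priority"], default=None)

def pick_renomination(pairs):
    # Renomination draft: the highest nomination number wins,
    # regardless of priority, so the controller can change its mind.
    nominated = [p for p in pairs if p.get("nomination", 0) > 0]
    return max(nominated, key=lambda p: p["nomination"], default=None)

pairs = [
    {"name": "wifi", "priority": 900, "use_candidate": True, "nomination": 2},
    {"name": "cell", "priority": 500, "use_candidate": True, "nomination": 3},
]
print(pick_base_ice(pairs)["name"])      # wifi
print(pick_renomination(pairs)["name"])  # cell (renominated later)
```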

But why? There's a part of the spec that says that renomination is only turned on if both sides include the renomination option in their initial candidate exchange. Well, Asterisk definitely doesn't support it, so what's up? Looking at the Conversations code, the issue is that there's no room in the Jingle standard for exchanging whether or not someone supports an option like this. There just isn't a value for that. So in order to allow it, Conversations just assumed all call partners supported it and included that value on both sides of every session it set up. It also seems that libwebrtc isn't very picky about this. This means that if a user running Movim on Chromium is called by Conversations, Conversations will think both sides support renomination and Movim will think neither side does. But despite this, when Conversations sends the special attribute instead of the "use-candidate" attribute, Chromium still understands it, without even checking whether it should expect that kind of value in this session. It just knows what was meant. So this is why Conversations works with Chromium, but not with any standard implementation of WebRTC that doesn't support the draft Renomination extension. Like Asterisk.

To fix this I did two things. First, I talked to the Conversations developer about being more careful about when we include this option, and was told we could just never include it; they didn't have strong opinions on it. So I made a patch to remove the option. That was the easiest fix I'd had to make yet. The problem is that unlike the changes I'd been making to our Asterisk server, this code runs on our users' phones. We couldn't control when it would be released, and we couldn't control when our users would install the new version even after release. (Since then, that change went out in version 2.9.8, but we didn't know that at the time, and we wanted this to work as soon as possible.) So, as with the Firefox fix, I considered building an implementation of renomination in Asterisk, but after looking at my Wireshark captures it didn't seem necessary.

Since it was just a stopgap until our users migrated to newer versions of Conversations, I did the simplest thing that would work: treat the "renomination" attribute (0xC001) exactly the same as the "use-candidate" attribute, ignoring the number, and the purpose of the extension, entirely. This would break if Conversations ever actually tried to renominate to a new pair with lower priority, but how likely is that? Probably not very. And if it only ever nominated once, or renominated to a higher-priority candidate, it would just work! The only quirk is the one I mentioned earlier: ICE is actually implemented in pjproject, a library Asterisk depends on. I really didn't want to fork that library and cut a new release just for myself with my garbage change in it, solely so I could use it in our Asterisk installation. Luckily, this apparently isn't the first time Asterisk has needed tweaks to pjproject, so there's already a system for it! If I make a git commit in the pjproject repository and put the diff from that commit into a folder in a magic location in the Asterisk repo, the Asterisk build applies that change to the library before building. That meant I could still use an official release of pjproject and keep all of my ugly changes together in the Asterisk codebase.
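The stopgap amounts to accepting one extra STUN attribute type as a nomination signal. Here is a sketch of the idea in Python (the real patch lives in pjproject's C code; 0x0025 for USE-CANDIDATE is taken from the ICE RFC):

```python
STUN_ATTR_USE_CANDIDATE = 0x0025  # standard ICE nomination flag
STUN_ATTR_NOMINATION = 0xC001     # libwebrtc's draft renomination attr

def is_nomination(attr_type: int) -> bool:
    # Treat the draft attribute exactly like use-candidate, ignoring
    # its numeric renomination value (and the extension's purpose).
    return attr_type in (STUN_ATTR_USE_CANDIDATE, STUN_ATTR_NOMINATION)

print(is_nomination(0xC001))  # True
print(is_nomination(0x0020))  # False (XOR-MAPPED-ADDRESS)
```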

I tested both changes, and they both worked. If the client didn't specify "renominate" as an option, then it spoke standard ICE and would work without my change. And if I left the client as-is it would work with my altered server that pretended to know what renomination was. The phone calls went through, and Conversations worked! All the clients worked!

Somehow Wrapping Up

Well, if you're at the end of this, thanks for following me on my odyssey. We began at the banks of "this is 90% working, there's probably just one weird typo to fix to get it the rest of the way". From there it was down the river of standards and the rapids of interoperability of standards between browsers and old C projects, aided all along by my compass Wireshark. I was confident when the rapids calmed down and the one problem was fixed, but there was a world of trouble up ahead as I didn't see the waterfall coming. My raft came out the other side, a little battered but with things holding together. Calls were being made, and metaphors were being thoroughly stretched. Seriously, why would I need a compass to raft down a river? IT ONLY GOES ONE WAY.

All in all, none of these fixes were "the proper fix", but with them I got the system working. I plan to engage the Asterisk community to open discussions about the actual issues, rather than just producing a patch no one wanted, but in the meantime calls are being made and received. And I got to stretch my legs and learn a bunch of inner workings, which is what really interests me. I even got a blog post out of it...


  • Mission Accomplished banner by U.S. Navy photo by Photographer's Mate 3rd Class Juan E. Diaz. (RELEASED) - Source, Public Domain

by 0 at August 17, 2021 14:51

August 16, 2021

Monal IM

Monal 5.0.1: Synchronized builds and bugfixes

We have released Monal in version 5.0.1 which contains mostly corrections and small improvements. Now the iOS and macOS builds are also synchronized and available in the Apple App Store.

Here are the changes in this release:

  • Show warning if camera permissions are missing while trying to use camera
  • Fixed duplication of contacts in chat overview
  • Fixed some crashes
  • Show Debug menu after tapping 16 times onto app version
  • Don’t drop file download errors silently
  • Don’t log outgoing SASL and password change stanzas (your password won’t be logged anymore)
  • Trim whitespaces and newlines at the beginning or end of a message
  • Fix microphone icon not always showing
  • Renamed “Log” to “Debug” in settings menu
  • Move contact details close button to the left
  • Fix some very rare TCP stream handling bugs
  • Fix old XMPP resources created with Monal older than version 4.3 not having a random part
  • Fix bug in upload queue not reacting to enter key
  • Privacy: Only register to APNS and push appserver if notifications are allowed
  • Fix bug in Message Archive Management (MAM) handling with ejabberd

Monal is developed entirely in the spare time of its developers and has no commercial interest – it exists for the freedom of software, but also to enable ad- and tracking-free communication. Therefore, we kindly ask you to consider making a donation.

If you have hardware to donate, please reach out to us first! Last but not least, you can check out the new features and give us feedback – that helps a lot to improve the app. Read about how to support!

Finally, do you know someone who might volunteer to support the visual design and improvement of the app’s interface? Then please reach out to us as well.

Spread the word! We have this blog but also a Mastodon account, Twitter account and you can read this via the Planet Jabber RSS feed.

Development is conducted via GitHub.

Let’s change digital communication via XMPP in the Apple ecosystem, together!

Your Monal IM developers!

by emus at August 16, 2021 20:04

August 15, 2021

Peter Saint-Andre

Meditations on Bach #7: Aristotle and Bach

On pp. 169-174 of his book Bach: The Learned Musician, Christoph Wolff describes the genesis of Bach's musical thinking. Of particular interest to me is his recounting of some insights from Johann Nikolaus Forkel, who founded the field of musicology and wrote the first biography of Bach in 1802. Wolff writes as follows....

August 15, 2021 00:00

Aristotle Research Report #16: The Sources of Beauty

Aristotle uses the word καλός in both an aesthetic sense and an ethical sense. This has caused confusion among translators and commentators alike. Should the word be translated as "beautiful" when talking about art but as "right" or "fine" or "noble" when talking about character, intention, and action? Did Aristotle think that works of art were inextricably tied up with morality or that traits of character were aesthetic in some way? Let's look into the matter....

August 15, 2021 00:00

August 14, 2021

Peter Saint-Andre

Meditations on Bach #6: Five Strings?

Although the first five of Bach's suites for unaccompanied cello lie quite naturally on the bass (when tuned in fifths, that is!), the sixth suite in D major (BWV 1012) is a slightly different story because it was originally written for an instrument with an added string above the usual four. The exact identity of this instrument remains a mystery - some think it was written for a viola pomposa or viola da spalla, others for a violoncello piccolo (not that any of those instruments are well understood). Whatever the truth of the matter, playing music written for a five-string instrument on a four-string instrument introduces new challenges: in particular, it requires intricate playing high up on the fingerboard. Modern cellists try to overcome this challenge through heavy use of thumb position, an innovation that post-dates Bach's lifetime; however, that doesn't make the task much easier. While working on the prelude to the sixth suite, I've realized that playing it on a five-string electric bass would make a lot of sense. Ideally such a bass would be tuned in fifths with a high E string, C-G-D-A-E. This seems achievable by using strings for a six-string bass and discarding one of them; for example, La Bella makes a six-string set normally tuned B-E-A-D-G-C, and I would tune B up to C, discard the E string, tune A down a step to G, keep D as-is, tune G up a step to A, and tune C up two steps to E. After conferring with Marek Dąbek of Stradi Basses on whether the high E string will work, I'm happy to report that we're transforming the "Mocha 4" into a "Mocha 5"....

August 14, 2021 00:00

August 10, 2021

Ignite Realtime Blog

JSXC Openfire plugin 4.3.1-1 released!

The Ignite Realtime community is happy to announce the immediate availability of version 4.3.1 release 1 of the JSXC plugin for Openfire, our open source real time collaboration server solution! This plugin can be used to conveniently make available the web-based JSXC client (a third-party developed project) to users of Openfire.

The upgrade from 4.3.0 to 4.3.1 brings a small number of changes from the JSXC project which appear to be mostly bug fixes and small improvements. Please review the changelog for more information.

Over the next few hours, your installation of Openfire should automatically detect the availability of this new release. Alternatively, you can download the new version right now from the plugin’s archive page.

If you’re interested in engaging with the community that builds Openfire and its plugins, please come join us in our forum or chat room!
For other release announcements and news follow us on Twitter

7 posts - 2 participants

Read full topic

by guus at August 10, 2021 08:20

August 07, 2021

Peter Saint-Andre


This piece of light verse popped into my head the other day....

August 07, 2021 00:00

August 06, 2021

Erlang Solutions

Why Build A Bank In Elixir – Memo Bank’s Story

Elixir is a programming language that runs on the BEAM VM – the same virtual machine as Erlang – and can be adopted throughout the tech stack. Elixir is designed to combine Ruby’s familiar syntax with the proven performance, scalability, and resilience of Erlang. When choosing a programming language for software development in FinTech and financial services, uptime and fault tolerance are of mission-critical importance, and this is where Elixir is often the right tool for the job.

Here we take a look at a success story of Elixir in FinTech: Memo Bank, the first independent bank to be created in France in the last fifty years, which has just completed a new fundraising round of €13 million.


‘The kind of bank we wanted simply didn’t exist; so, we decided to build it.’

This was the ambitious starting point for the genesis of Memo Bank. We will examine why and how they chose to build a fast, innovative, and secure banking system from scratch using Elixir as their programming language of choice.

Memo was founded in 2017 and serves the European small and medium business (SMB) market, helping companies manage cash flows and fund their growth as a bank ‘designed by business people for business people’. The France-based bank provides all the services you’d expect from a business bank, from current accounts to credit lines.

Why Memo Chose Elixir

There are two wonderful first-hand accounts detailing the planning and architecting stages of Memo’s early days, which you can find on their Medium profile. This section summarises their story.

As Jérémie Matinez explains in his post ‘Why Elixir? An alchemy between backend and banking’, the advantage of building from the ground up was that they could incorporate the most efficient procedures and modern technologies into their systems from the very start. The guiding principle was that, when it comes to banking, data – its accuracy, accessibility and security – is of paramount importance. To comply with financial services regulations and earn customer trust, Memo needed to build a system that is always available, anytime, from any device. Combined, these mission-critical requirements led to the decision to adopt Elixir for the core banking system and all of the other backend applications.

Memo was particularly attracted to Elixir for its immutability, which eases development, concurrency, and testing – essential when a FinTech system needs 100% reliability. They also identified the language as able to reach the level of scalability and availability needed to absorb real-time transactions.

What else did they like?

A good trade-off between performance and high-level features

Memo found that Elixir offered the right balance between performance and features, giving them a reliable foundation to pursue their ambitions at scale and provide high availability for real-time transactions.

Being built upon Erlang and targeting the BEAM virtual machine, Elixir shares its performance, ecosystem advantages, and concurrency model as standard. Also, Erlang is part of the exclusive club of ‘nine nines’ languages, which makes it one of the most available and reliable platforms out there.

The growing and solid community

The Elixir developer community is very friendly and welcoming, and the ecosystem is growing all the time. Memo’s team found that although Elixir is a relatively new language, it enjoys access to any tool originally written for an Erlang codebase. They also mention some extremely powerful frameworks that allow full-stack development, most prominently Phoenix LiveView for web application development.

Join us and the rest of the European Elixir community September 9-10 at ElixirConf EU hybrid conference in person in Warsaw, Poland or online.


Elixir is built on top of really mature and stable tech (Erlang and the BEAM VM) with good documentation, which is a big plus for any startup looking to grow trust within its customer base. It is used in many other successful FinTech companies such as Klarna and SolarisBank, making it a battle-tested language for innovation in the industry.

There are some major positives to opting for Elixir which meets the requirements of startups and scaleups in the space:

  • Scalability – if you reach your goal of millions of users, you’ll benefit from the reliability and scalability of the BEAM VM
  • Concurrency – reliably handling many simultaneous requests, such as spikes in transactions
  • Easy to develop and maintain – ease of use and fast development for everything from fixing bugs to adding new features

Overall, what you gain from using Elixir for your FinTech project is a smaller, more manageable codebase that is faster and significantly more reliable than systems built in many other programming languages.

We worked with Bleacher Report, the leader in providing real-time, social-first, mobile-first sports content, to help them move their system from Ruby to Elixir. We achieved a 10x reduction in the time it takes to update the site for a system moving from 150 servers to just 8. Check out the case study here. 

‘They were able to come in with their expertise, help us establish best practice and give us confidence that going forward our systems would be efficient and reliable.’

Dave Marks, Senior Engineering Director @ Bleacher Report


Overall the Memo Bank system uses modern tools and processes and is designed for speed. The transactional core of Memo Bank is now fully powered by Elixir to deliver on its mission of maintaining customer account records with the highest possible availability and reliability. The Core Banking System is easily adaptable to new customer needs and produces accounting and regulatory reporting in real-time.

What this success story shows is that whether you need to start from scratch or you already have an infrastructure to integrate with, Elixir is a proven, sound technological choice for building software that will adapt to your business and stand the test of time and scale.

The Erlang Solutions team has worked closely with the Elixir core team since its inception. Whether you’re new to Elixir, looking to grow your team, add new functionality or integrate with a new system, we’re here to help you make it happen. Tell us about your project requirements here.

The post Why Build A Bank In Elixir – Memo Bank’s Story appeared first on Erlang Solutions.

by Michael Jaiyeola at August 06, 2021 11:16

August 05, 2021

The XMPP Standards Foundation

The XMPP Newsletter July 2021

Welcome to the XMPP Newsletter covering the month of July 2021.

Many projects and their efforts in the XMPP community are a result of people’s voluntary work. If you are happy with the services and software you may be using, especially throughout the current situation, please consider saying thanks or helping these projects!

Read this Newsletter via our RSS Feed!

Interested in supporting the Newsletter team? Read more at the bottom.

Other than that - enjoy reading!

Newsletter translations

Translations of the XMPP Newsletter will be released here (with some delay):

Many thanks to the translators and their work! This is a great help to spread the news! Please join them in their work or start a new translation in another language!

XSF Announcement

Currently, the XSF members are voting on new and reapplying members. The member meeting will be held on August 19th 2021, 19:00 UTC, in the XSF group chat (MUC), to formally approve the voting results. If you are interested in joining the XSF, you can apply at the beginning of Q4 2021, too!

Since this month, a new sub-domain is also available. Many thanks to MattJ! The first data project hosted here for everyone’s access is the providers lists (JSON format) from the XMPP Providers project. There are already first client implementations using these – please review the criteria and add your service via the GitLab repository! Feedback welcome!


XMPP Office Hours each week - also, check out our new YouTube channel!

Berlin XMPP Meetup (remote): Monthly Meeting of XMPP Enthusiasts in Berlin - always 2nd Wednesday of the month.


XMPP Office Hours: Building a Chat Bot on Ad Hoc Commands


With the very first article we would like to bring attention to a serious general topic: burnout in open-source communities. Please take care of yourself, seek help, and keep an eye on your virtual colleagues! Searching for help where you live may be more effective, but this could be a start:

The Debian XMPP Team blog announced all the goodies the soon to be released Debian 11 will bring. While these might not be ‘new’ for newsletter readers, they’ll improve the experience for users of Debian Stable significantly.

Seth Kenlon, from Red Hat, published two articles on the XML markup language (quite an important thing in the XMPP world ;) ), starting with What is XML? and following up with Use XMLStarlet to parse XML in the Linux terminal.

Software news

Clients and applications

Gajim News: Development on the new Gajim version continued in July, bringing many fixes and improvements. Also this month: WebSocket improvements and a new python-nbxmpp release.

Profanity 0.11.0 is out, bringing six months of polishing to 0.10.0. This includes message archive management (MAM) support (still experimental), support for changing the password, abilities in group chats (MUC) like voice and registration, OMEMO trust mode, private messages (MUC-PM) in public channels, spam reporting, server contact discovery, and much more.

Jan-Philipp Litza built an XMPP feed integration for the German official warning app NINA: Find the Github repository here. One can simply add the bot and register coordinates of interest. May it never contact you!

UWPX v. has been released. This release mostly focuses on bug fixes for the first beta release of UWPX (ETA 01.09.2021) and on proper push support even when the app is not running. For this, COM8 has been working on its C++ push server for the last couple of months, and it is finally up and running. Besides that, this release also includes XEP-0085 (Chat State Notifications) improvements, with a proper typing indicator and status messages.


ejabberd 21.07 has been released with a plethora of fixes and improvements, so be sure to read the changelog if you’re using shared groups and MySQL. Big changes have been implemented pertaining to the build system, as ejabberd can now be built using rebar3 and Elixir Mix.

For Openfire, an update for the ‘inVerse’ plugin has been published; it makes the Converse.js web client available to its users.


python-nbxmpp 2.0.3 has been released.

Mellium Dev Communiqué: Development continued apace this month and included the usual assortment of bug fixes and improvements. In addition, carbons, MUC, and Roster Versioning were all implemented!

Smack, a Java XMPP client library, has been released in version 4.4.3 with mostly bugfixes.

Extensions and specifications

Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).


Proposed

The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

  • Disco Feature Attachment

    • This specification provides a way to indicate that a feature is implemented for a specific namespace
  • Pubsub Caching Hints

    • This specification provides a way to get caching information from a Pubsub node


New

  • No new XEP this month.


Deferred

If an Experimental XEP is not updated for more than twelve months, it is moved from Experimental to Deferred. Another update will move the XEP back to Experimental.


Updated

  • Version 1.0.0 of XEP-0429 (Special Interests Group End to End Encryption)

    • Accepted by Council (XEP Editor: jsc)
  • Version 0.2 of XEP-0413 (Order-By)

    • Add a way to discover on which protocols Order-By applies
    • Remove references to SQL (except in implementation notes)
    • Specify that Order-By operates on the whole item set and inside an RSM result set
    • Explicitly say that creation and modification dates are set by the Pubsub service itself
    • Specify that Clark notation should be used for extensions
    • Add a full example with Pubsub and RSM
    • Add a hint for SQL-based implementations
    • Removed XEP-0060 and XEP-0313 as dependencies; they are mentioned as use cases, but are not mandatory
    • Better wording following feedback
    • Namespace bump (jp)
  • Version 1.0.0 of XEP-0381 (Internet of Things Special Interest Group (IoT SIG))

    • Accepted by Council (XEP Editor: jsc)
  • Version 0.2.0 of XEP-0383 (Burner JIDs)

    • Improve security considerations and add listing JIDs. (ssw)
  • Version 0.2.0 of XEP-0458 (Community Code of Conduct)

    • Integrate various comments from various sources (dwd)

Last Call

Last Calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call helps improve the XEP before it returns to the Council for advancement to Draft.

  • No Last Call this month.


Draft

  • No Draft this month.

Call for Experience

A Call for Experience, like a Last Call, is an explicit call for comments, but in this case it is mostly directed at people who have implemented, and ideally deployed, the specification. The Council then votes to move it to Final.

  • No Call for Experience this month.

Thanks all!

This XMPP Newsletter is produced collaboratively by the XMPP community.

Therefore many thanks to Adrien Bourmault, Benoît Sibaud, DebXwoody, COM8, emus, mattJ, neox, Licaon_Kter, pmaziere, raspbeguy, wurstsalat3000, Ysabeau for their support and help in creation, review and translation!

Spread the news!

Please share the news on other networks.

Find and place job offers in the XMPP job board.

Also check out our RSS Feed!

Help us to build the newsletter

We started drafting in this simple pad in parallel to our efforts in the XSF Github repository. We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. We really need more support!

Do you have a project and write about it? Please consider sharing your news or events here, and promote it to a large audience! Even if you can only spend a few minutes, your support would already be helpful!

Tasks which need to be done on a regular basis include, for example:

  • Aggregation of news in the XMPP universe
  • Short formulation of news and events
  • Summary of the monthly communication on extensions (XEP)
  • Review of the newsletter draft
  • Preparation for media images
  • Translations: especially German and Spanish


This newsletter is published under CC BY-SA license.

August 05, 2021 00:00

August 03, 2021

Jérôme Poisson

Libervia progress note 2021-W31


The last weeks have been exhausting, with lots of things going on at the same time. I've been preparing the release of version 0.8, and I wanted to have a couple of things ready for it, notably a proper way to handle translations.

Preparation of 0.8

As you may know, I've implemented Docker integration in Libervia to be able to run third-party software relatively easily. This is working, but when testing on the production website I had to put the finishing touches on it (notably, I've improved HTTP proxy and HTTPS management). I have then created projects and updated a couple of translation files.

As you can now see, there is a translate menu. Unfortunately, I've closed account creation for the moment, as I have to deal with licensing first. Indeed, nearly the whole Libervia ecosystem is currently under AGPL v3+, as there are only a few contributors (2 main ones, then only small patches). The intent was, and still is, to be sure that the ecosystem stays under a libre license, but this license may cause trouble in some edge cases, notably if we want to make an iOS frontend (the fruit store is notoriously causing trouble with AGPL licences).

Thus, I'll bring up the subject at the next general assembly of the "Salut à Toi" association, and see what we should do. One option could be to use the FSFE's Fiduciary Licence Agreement to give the association the possibility to modify the licence as long as it stays a libre one. It would then be possible to add an exception for an iOS frontend. Another option would be to avoid iOS entirely. Anyway, this needs some time and discussion, and if I open translations and get several contributions under AGPL v3+, it may be harder to set this up.

Weblate integrated in the official website

Another time-consuming task was to continue the renaming and adapt package names (notably on PyPI). I've used a little trick to redirect legacy packages to the new ones: a new version of each legacy package is a simple shim depending on the new package (you can see it e.g. for the sat package). I've also put in place a redirection on the Mercurial repositories: using the old repos will redirect to the new ones.
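This kind of redirection can be pictured as a minimal shim package. The setup.py below is a hypothetical sketch, not the actual published file; the version number and metadata are illustrative:

```python
# setup.py for the legacy "sat" package: the shim itself contains no code,
# it only pulls in the renamed package as a dependency (illustrative sketch).
from setuptools import setup

setup(
    name="sat",
    version="0.8.0",  # hypothetical version number
    description="Legacy name for libervia-backend",
    install_requires=["libervia-backend"],
)
```

Installing the old name then transparently installs the new package.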

Finally, I've published the 0.8.0 beta 1. You can install it easily with pipx:

  • First install pipx as explained in its documentation
  • Then install the backend with pipx install libervia-backend. You can follow the documentation to see how to configure it and launch it. This will include the CLI and TUI frontends.
  • If you want to test graphical frontends, you'll have to install Libervia Media with hg clone (assuming that you have Mercurial already installed), then add it into your libervia.conf
  • To install the Desktop frontend, use pipx install libervia-desktop
  • To install the Web frontend, use pipx install libervia-web

Note that the Desktop frontend is still for early adopters; I need to refactor message handling and do some optimisation and stabilisation work to make it pleasant to use.

Please send feedback either as bug reports/feature requests on the official bug tracker, or in the XMPP chat room. I plan to only fix major issues though, as I'm now fully working on 0.9 and focusing mainly on the ActivityPub gateway. However, bug reports/feature requests will be taken into account for 0.9 if not fixed directly in 0.8.

ActivityPub Gateway

After the hard work to move 0.8 close to the finish line was done, I started to work on 0.9 and thus the ActivityPub gateway. The first major task was a refactoring of offline storage. Indeed, Libervia (or SàT at the time) was started a long time ago with an async framework (Twisted), long before asyncio even existed. SQLite was chosen as the backend to store data, and a hand-made module based on Twisted's adbapi was created. Despite the rough edges it has been working quite well all this time, and there was even a semi-automatic way to update schemas between versions. But the whole thing was becoming difficult to maintain, and the schema update instructions were all kept in the same file.
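The old hand-rolled approach can be pictured with a stdlib-only sketch (hypothetical, not Libervia's actual code): every migration step lives in one mapping and is applied in order, which is exactly what becomes unwieldy as the list grows; Alembic, by contrast, keeps each step in its own file.

```python
# Minimal sketch of hand-rolled SQLite schema versioning, using the
# user_version pragma to track which migrations have already run.
import sqlite3

# All schema changes in one place, keyed by target version (the pain point).
MIGRATIONS = {
    1: "CREATE TABLE history (id INTEGER PRIMARY KEY, message TEXT)",
    2: "ALTER TABLE history ADD COLUMN timestamp REAL",
}

def migrate(conn: sqlite3.Connection) -> int:
    """Apply every migration newer than the stored version; return new version."""
    current = conn.execute("PRAGMA user_version").fetchone()[0]
    for version in sorted(v for v in MIGRATIONS if v > current):
        conn.execute(MIGRATIONS[version])
        conn.execute(f"PRAGMA user_version = {version}")
    return conn.execute("PRAGMA user_version").fetchone()[0]

conn = sqlite3.connect(":memory:")
print(migrate(conn))  # prints 2: both steps applied on a fresh database
```

Running `migrate` again is a no-op, since the stored version already matches the latest migration.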

Fortunately, SQLAlchemy, the most famous SQL database abstraction layer for Python, with an optional Object Relational Mapper, has recently added support for asyncio.

SQLAlchemy is a very powerful and widely used tool, so it was a quite obvious decision to use it to replace the old system. But to use it, Twisted needs to run on an asyncio loop, and Libervia was using the GLib loop (or reactor, in Twisted terms) due to its dependency on dbus-python.

dbus-python is, in its own authors' words, perhaps not the best D-Bus binding to use, due to unfortunate design decisions, so this was the occasion to replace it: I've moved the backend to TxDBus, a Twisted-native D-Bus implementation which can run with any Twisted reactor. For technical reasons, dbus-python is still used in the frontends at the moment, but I plan to replace it completely before the end of the 0.9 development cycle.

This required some work, but it was worth it, and after that I could switch to the asyncio reactor and integrate SQLAlchemy. I've decided to go with the ORM and not the core only, as it opens up neat possibilities. I first made a mapping corresponding to the last version of the database used by Libervia 0.8.

Once SQLAlchemy was implemented and working, the next obvious step was to use Alembic, the recommended SQLAlchemy-based database migration tool (by the same authors). Thanks to this, migration instructions now live in separate files and are really easy to create (Alembic can auto-generate a good part of a script when a migration is needed).

Thanks to all this, I can now easily make changes to the database (while with the old system I hesitated because of the work implied). SQLAlchemy also paves the way to supporting databases other than SQLite. Even if I'm currently sticking with SQLite only, to keep focus, I'll probably add support for PostgreSQL and MariaDB/MySQL at some point.

Once all this work on storage backend has been finalised, the pubsub cache has been implemented.

The pubsub cache operates transparently for the end-user and stores pubsub items locally (according to internal criteria). This is useful for many reasons: performance, of course, but it also allows data analysis/indexing, for instance to retrieve all items matching some terms (e.g. to search by categories or hashtags). The pubsub cache is also useful to store data in a component (which is of interest for the ActivityPub gateway), or to store decrypted data (which will be of interest when we work on e2e encryption with pubsub).

I'll skip the implementation details; you'll find the code on the 0.9 bookmark, notably in the pubsub cache plugin, and I've written developer documentation with some explanations.

New commands have been added to libervia-cli to manage the cache. In particular, there is a purge command to delete items according to given criteria, which saves resources and disk space. With it, it's possible to delete only certain types of items (e.g. only blog posts), for all or only some profiles (for instance, only for the AP gateway). You can also set a time limit (e.g. delete all items which have not been updated for 6 months). Here again, documentation has been written to explain the commands.
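The purge idea can be sketched as follows. This is a hypothetical, much simplified model of a criteria-driven cache purge; the class, field and parameter names are illustrative, not Libervia's actual schema or API:

```python
# Simplified local pubsub item cache with a purge operation driven by
# criteria (item type, owning profile, last-update age).
import time
from dataclasses import dataclass, field

@dataclass
class CachedItem:
    node: str
    item_type: str       # e.g. "blog" or "event"
    profile: str         # owning profile, or a component such as the AP gateway
    updated: float = field(default_factory=time.time)

class PubsubCache:
    def __init__(self):
        self.items = []

    def purge(self, *, types=None, profiles=None, older_than=None):
        """Delete items matching all given criteria; return how many were removed."""
        now = time.time()

        def matches(item):
            if types and item.item_type not in types:
                return False
            if profiles and item.profile not in profiles:
                return False
            if older_than is not None and now - item.updated < older_than:
                return False
            return True

        kept = [item for item in self.items if not matches(item)]
        removed = len(self.items) - len(kept)
        self.items = kept
        return removed
```

For instance, `purge(types={"blog"}, older_than=6 * 30 * 24 * 3600)` would drop only blog items untouched for roughly six months, leaving everything else cached.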

While doing all that, I noticed a problem with caching items correctly (because of the flexibility of XMPP Pubsub, it is hard or even impossible to tell whether a cache can be shared between users), so I've written a protoXEP (i.e. a proposal for an XMPP Extension Protocol, or XEP) to fix the problem.

I've also submitted a pull request to fix a problem in XEP-0060 (Publish-Subscribe).

While I was working on standards, I also updated a XEP I authored a couple of years ago to specify the order of items: XEP-0413 (Order-By).

Last but not least, while writing the tests for the pubsub cache I created some helper methods (or fixtures, in pytest terms) to make unit testing easier.

This concludes the first step of the XMPP-ActivityPub gateway which was, as anticipated, a big one. The following steps should go more quickly, and work on 0.8 should not be in the way anymore (I plan to publish 0.8 in early September).

That's all for this note, see you next time.

by goffi at August 03, 2021 11:03

Prosodical Thoughts

Prosody 0.11.10 released

We are pleased to announce a new minor release from our stable branch.

This release primarily fixes CVE-2021-37601, a remote information disclosure vulnerability. See the previously released advisory for details. We recommend that all deployments upgrade if they have not yet applied the mitigation described in the advisory.

A handful of fixes for issues discovered since 0.11.9 are also included.

A summary of changes in this release:


Minor changes

  • prosodyctl: Add ‘limits’ to known globals to warn about misplacing it
  • util.ip: Fix netmask for link-local address range
  • mod_pep: Remove obsolete node restoration code
  • util.pubsub: Fix traceback if node data not initialized


As usual, download instructions for many platforms can be found on our download page

If you have any questions, comments or other issues with this release, let us know!

by The Prosody Team at August 03, 2021 10:13

August 01, 2021

Peter Saint-Andre

Aristotle Research Report #15: Taking up Aristotle's Causes

In recent times, Aristotle is often criticized for holding back the progress of science for two thousand years. (As Armand Leroi argues in his book The Lagoon based on the research of scholars like David Balme, Allan Gotthelf, and James Lennox, this is rather wrongheaded, given that Aristotle founded the science of biology!) One of the ways Aristotle is alleged to have gone astray is in his "doctrine of the four causes" (especially, we're told, the so-called "final cause" which involves impossibilities like backward causation); although I'm always suspicious when someone attributes a "doctrine" to Aristotle, it's worth looking carefully at what Aristotle actually said instead of relying on shallow or antagonistic summaries....

August 01, 2021 00:00

Meditations on Bach #5: Transcription

As I delve further into Bach's Cello Suites, I also stray further from anything like the received wisdom about how to play them (whether traditionally "classical" or "historically informed"). Take, for instance, the famous prelude from suite #1 in G major. Where I've settled for now is playing the first half at an extremely slow speed, with many sustained single and double notes (thus taking advantage of a capability of the electric bass). As I was practicing this piece at the instrument and then in my head the other day, I realized that I'm playing it at a speed that is appropriate for the breathing characteristic of meditation - half a bar inhaling, half a bar exhaling. That's slow, and perhaps the exact opposite of what Glenn Gould (one of my favorite pianists) did with Bach's keyboard works. However, after the half cadence in measure 22, I start to vary the pace somewhat, speeding up quite a bit in the bariolage of the crescendo before coming back to earth for the ending (which I've modified, too). Reading History, Imagination, and the Performance of Music by Peter Walls over the last few days has helped me to realize that, for better or worse, what I'm doing with the Cello Suites is not performance but transcription - more along the lines of, say, Ferruccio Busoni than Pablo Casals. (Indeed, this is not all that dissimilar from what I've done with my "eudaimonia suite" of books on living well: philosophy as the experience of wisdom, not as discursive argument.) I'll make a rough demo of the G major prelude soon to create a record of my current thinking, no matter how wayward it might be....

August 01, 2021 00:00

July 28, 2021


Development News July 2021

Development on the new Gajim version continued in July, bringing many fixes and improvements. Also this month: WebSocket improvements and a new python-nbxmpp release.

Changes in Gajim

Since version 1.2, Gajim offers WebSocket support via XEP-0156. A recent fix enables you to directly connect via WebSocket while creating an account.
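For context, XEP-0156 lets a client discover a server's WebSocket endpoint from a host-meta file published at `https://<domain>/.well-known/host-meta`. A minimal example (with a placeholder domain and endpoint) looks like:

```xml
<?xml version='1.0' encoding='utf-8'?>
<XRD xmlns='http://docs.oasis-open.org/ns/xri/xrd-1.0'>
  <Link rel="urn:xmpp:alt-connections:websocket"
        href="wss://example.org/xmpp-websocket"/>
</XRD>
```

With this in place, a client can connect over WebSocket knowing only the user's address.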

Meanwhile, work on the next Gajim version made some progress:

  • Workspaces now support custom avatars
  • Improved usability of Gajim’s new Search View
  • Group chat creation is working again
  • Improvements for message scrolling and jump to bottom button
  • Fixes for Gajim’s new Notification Manager, which handles contact requests and group chat invitations
  • Fixes for direct messages (private messages/direct messages in group chats)
  • Fixes for initial migration from Gajim 1.3

What else happened

  • #10478: Fixed issue with Tests
  • #10598: Fixed issue when adding Note property to vCard
  • Updated API JID for group chat search
  • Fixed History Manager’s stand alone mode

Plugin updates

Gajim’s URL Image Preview plugin received a fix to prevent issues while trying to generate previews for zero-byte files or corrupted files.

Changes in python-nbxmpp

A bug has been fixed with RSM count requests (#120), which affected Gajim’s history synchronization.

This fix, and also many improvements concerning Ad-Hoc commands, are included in the new python-nbxmpp 2.0.3.

As always, feel free to join and discuss with us.


July 28, 2021 00:00

July 26, 2021


ejabberd 21.07

This new ejabberd 21.07 release includes many improvements and bugfixes in more than 130 commits.

ejabberd 21.07 released

When upgrading from previous versions, please note: there is a suggested change to all SQL database schemas, another suggested change specific to MySQL, an API change in srg_create, and major changes in ejabberdctl help.

Support for rebar3 has been improved to allow building an OTP release, and support for Elixir has been improved to use mix for compilation and for building releases. So you may want to use --with-rebar=rebar3 (or mix) and benefit from those tools' features over rebar. Nevertheless, rebar is still used by default, and its support will be maintained for a long time, as it is used by several automatic build processes.

A more detailed explanation of those topics:

Add missing indexes to SQL sr_group table

The sr_group table in SQL databases got a new index.

If you store Shared Roster Groups in a SQL database, you can create the index corresponding to your database type, check the example SQL queries in the commit.

MySQL Backend Patch for scram-sha512

The MySQL database schema has been improved to support scram-sha512 (#3582).

If you have a MySQL database, you can update your schema with:

ALTER TABLE users MODIFY serverkey varchar(128) NOT NULL DEFAULT '';
ALTER TABLE users MODIFY salt varchar(128) NOT NULL DEFAULT '';

Change in srg_create

That command API has changed the name argument to label, for coherence with the changes in the ejabberd Web Admin page.

Check ejabberdctl help srg_create or the API page.

Changes in ejabberdctl help

The help command has changed its usage and results.

If you use that command in some automatic script, please check ejabberdctl help and adapt your scripts accordingly.

Build a release using rebar3 and mix

It is now possible to build a release using rebar, rebar3 or mix; until now, this was only supported with rebar. To that end, a recent rebar3 binary is now included with ejabberd.

For example usage, check the Production Release documentation section.

Are you curious about what a release is in the Erlang/OTP world? Check Adopting Erlang and Rebar3.

Build a development release

Now it is possible to build a development release using rebar3 or mix. This allows running ejabberd on the local machine for development and manual testing, without installing it system-wide.

For example usage, check the Development Release documentation section.

By the way, the Makefile has so many targets that there is now a summary of them:

make help

Support to use mix in configure

The configure script now supports rebar, rebar3 and Elixir's mix. Until now, only rebar and rebar3 were supported.

If you want to compile ejabberd using mix, in theory any recent Elixir version is supported, but make rel requires Elixir 1.10.3 or higher.

Example usage:

./configure --with-rebar=mix
make rel
_build/prod/rel/ejabberd/bin/ejabberd start_iex

Summary of changes


  • Add rebar3 3.15.2 binary
  • Add support for mix to: ./configure --with-rebar=mix
  • Add make dev to build a development release with rebar3 or mix
  • Improved make rel to work with rebar3 and mix
  • Hex: Add sql/ and vars.config to Hex package files (#3251)
  • Hex: Update mix applications list to fix error p1_utils is listed as both...
  • There are so many targets in Makefile… add make help
  • Fix failure in test suite with Python 3 (#3612)
  • Added experimental support for GitHub Codespaces
  • Switch test service from TravisCI to GitHub Actions (#3613)


  • Display extended error message in ejabberdctl (#3584)
  • Remove SMP option from ejabberdctl.cfg, -smp was removed in OTP 21 (#3560)
  • create_room: After creating room, store in DB if it’s persistent (#3632)
  • help: Major changes in its usage and output (#3569)
  • srg_create: Update to use label parameter instead of name (#3578)


  • ejabberd_listener: New send_timeout option
  • mod_mix: Improvements to update to 0.14.1 (#3634)
  • mod_muc_room: Don’t leak owner JIDs (#3615)
  • mod_multicast: Routing for more MUC packets
  • mod_multicast: Correctly strip only other bcc addresses (#3639)
  • mod_mqtt: Allow shared roster group placeholder in mqtt topic (#3566)
  • mod_pubsub: Several fixes when using PubSub with RSM (#3618)(#3621)
  • mod_push: Handle MUC/Sub events correctly (#3565)
  • mod_shared_roster: Delete cache after performing a change, to be sure the cache contains up-to-date data (#3578)
  • mod_shared_roster: Improve database and caching
  • mod_shared_roster: Reconfigure cache when options change
  • mod_vcard: Fix invalid_encoding error when using extended plane characters in vcard
  • mod_vcard: Update econf:vcard() to generate correct vcard_temp record
  • Translations: Major improvements in the Indonesian and Portuguese translations
  • WebAdmin: New simple pages to view mnesia tables information and content
  • WebSocket: Fix typos (#3622)


  • MySQL Backend Patch for scram-sha512 (#3582)
  • SQLite: When exporting for SQLite, use its specific escape options (#2576)
  • SQLite: Minor fixes for new_sql_schema support (#3303)
  • mod_privacy: Cast as boolean when exporting privacy_list_data to PostgreSQL (#1773)
  • mod_mqtt: Add mqtt_pub table definition for MSSQL (#3097)
  • mod_shared_roster: Add missing indexes to sr_group tables in all SQL databases

ejabberd 21.07 download & feedback

As usual, the release is tagged in the Git source code repository on Github.

The source package and binary installers are available at ejabberd XMPP & MQTT server download page.

If you suspect that you’ve found a bug, please search for it or file a bug report on GitHub.

The post ejabberd 21.07 first appeared on ProcessOne.

by Jérôme Sautret at July 26, 2021 09:53

July 13, 2021

Debian XMPP Team

XMPP Novelties in Debian 11 Bullseye

This is not only the Year of the Ox, but also the year of Debian 11, code-named bullseye. The release lies ahead, and the full freeze starts this week. A good opportunity to take a look at what is new in bullseye. This post presents new programs and new software versions related to XMPP, also known as Jabber. XMPP has existed since 1999 and has a diverse and active developer community. It is a universal communication protocol, used for instant messaging, IoT, WebRTC, and social applications. You will probably encounter some oxen in this post.

  • biboumi, XMPP gateway to connect to IRC servers: 8.3 → 9.0
    The biggest change for users is SASL support: A new field in the Configure ad-hoc command lets you set a password that will be used to authenticate to the nick service, instead of using the cumbersome NickServ method.
    Many more changes are listed in the changelog.
  • Dino, modern XMPP client: 0.0.git20181129 → 0.2.0
    Dino in Debian 10 was practically a technology preview. In Debian 11 it is already a fully usable client, supporting OMEMO encryption, file upload, image preview, message correction and many more features in a clean and beautiful user interface.
  • ejabberd, the extensible realtime platform: 18.12.1 → 21.01.
    Probably the most important improvement for end users is XEP-0215 support to facilitate modern WebRTC-style audio/video calls. ejabberd also integrates more nicely with systemd (e.g., the watchdog feature is supported now). Apart from that, a new configuration validator was introduced, which brings a more flexible (but mostly backwards-compatible) syntax. Also, error reporting in case of misconfiguration should be way more helpful now. As a new authentication backend, JSON Web Tokens (JWT) can be used. In addition to the XMPP and SIP support, ejabberd now includes a full-blown MQTT server. A large number of smaller features have been added, performance was improved in many ways, and several bugs were fixed. See the long list of changes.
  • Gajim, a GTK+-based Jabber client: 1.1.2 → 1.3.1
    The new Debian release brings many improvements. Gajim’s network code has been completely rewritten, which leads to faster connections, better recovery from network loss, and less network related hiccups. Customizing Gajim is now easier than ever. Thanks to the new settings backend and a completely reworked Preferences window, you can adapt Gajim to your needs in just a few seconds.
    Good for newcomers: account creation is now a lot easier with Gajim’s new assistant. The new Profile window gives you many options to tell people more about yourself. You can now easily crop your own profile picture before updating it.
    Group chats actions have been reorganized. It’s now easier to send invitations or change your nickname for example. Gajim also received support for chat markers, which enables you to see how far your contact followed the conversation. But this is by far not everything the new release brings. There are many new and helpful features, such as pasting images from your clipboard directly into the chat or playing voice messages directly from the chat window.
    Read more about the new Gajim release in Debian 11 here.
    Furthermore, three more Gajim plugins are now in Debian: gajim-lengthnotifier, gajim-openpgp for OX 🐂 (XEP-0373: OpenPGP for XMPP) and gajim-syntaxhighlight.
  • NEW Kaidan Simple and user-friendly Jabber/XMPP client 0.7.0
    Kaidan is a simple, user-friendly and modern XMPP chat client. The user interface makes use of Kirigami and QtQuick, while the back-end of Kaidan is entirely written in C++ using Qt and the Qt-based XMPP library QXmpp. Kaidan runs on mobile and desktop systems including Linux, Windows, macOS, Android, Plasma Mobile and Ubuntu Touch.
  • mcabber, small Jabber (XMPP) console client: 1.1.0 → 1.1.2
    A theme for 256 color terminals is now included, the handling of carbon message copies has been improved, and various minor issues have been fixed.
  • Poezio, Console-based XMPP client: 0.12.1 → 0.13.1
    This new release brings many improvements, such as Message Archive Management (XEP-0313) support, initial support for OMEMO (XEP-0384) through a plugin, HTTP File Upload support, Consistent Color Generation (XEP-0392), and plenty of internal changes and bug fixes. Not all changes in 0.13 and 0.13.1 can be listed here; see the CHANGELOG for a more extensive summary.
  • Profanity, the console based XMPP client: 0.6.0 → 0.10.0
    We can not list all changes which have been done, but here are some highlights.
    Support for OMEMO encryption (XEP-0384) and Consistent Color Generation (XEP-0392); be aware of the changes in commands to standardize command names. A clipboard feature has been added. Unread messages are highlighted with a different color in /wins, and a keyboard shortcut (alt + a) switches to the next window with unread messages. Also new: support for Last Message Correction (XEP-0308), UTF-8 symbols as OMEMO/OTR/PGP indicator characters, an option to open avatars directly (XEP-0084), an option to define a theme at startup along with other theme improvements, the possibility to easily open URLs, experimental OX 🐂 (XEP-0373, XEP-0374) support, OMEMO media sharing support, ...
    There is also a Profanity light package in Debian now, the best option for systems with tight limits on resources.
  • Prosody, the lightweight extensible XMPP server: 0.11.2 → 0.11.9
    Upgrading to the latest stable release of Prosody brings a whole load of improvements in the stability, usability and performance departments. It especially improves the performance of websockets, and PEP performance for users with many contacts. It includes interoperability improvements for a range of clients.
  • prosody-modules, community modules and extensions for Prosody: 0.0~hg20190203 → 0.0~hg20210130
    The ever-growing collection of goodies to plug into Prosody has a number of exciting additions, including a suite of modules to handle invite-based account registration, and others for moderating messages in group chats (e.g. for removal of spam/abuse), server-to-server federation over Tor and client authentication using certificates. Many existing community modules received updates as well.
  • Psi, Qt-based XMPP client: 1.3 → 1.5
    The new version contains important bug fixes.
  • salutatoi, multi-frontend, multi-purpose communication tool: 0.7.0a4 → 0.8.0~hg3453
    This version now runs fully on Python 3 and has full OMEMO support (one-to-one, groups and files). Among the new commands of the CLI frontend (jp) is "jp file get", which is comparable to wget with OMEMO support. A file sharing component is included, with HTTP Upload and Jingle support. For a list of other improvements, please consult the changelog.
    Note that the upstream project has been renamed to "Libervia".
  • NEW sms4you, Personal gateway connecting SMS to XMPP or email 0.0.7
    It runs with a GSM device over ModemManager and uses a lightweight XMPP server or a single email account to handle communication in both directions.
  • NEW xmppc, XMPP Command Line Client 0.1.0
    xmppc is a new command line tool for XMPP. It supports some basic features of XMPP (request your roster, bookmarks, OMEMO Devices and fingerprints). You can send messages with both legacy PGP (XEP-0027) and the new OX 🐂 (XEP-0373: OpenPGP for XMPP).

That's all for now. Enjoy Debian 11 bullseye and Happy Chatting!

by Debian XMPP Team at July 13, 2021 00:00

July 07, 2021

Ignite Realtime Blog

inVerse plugin for Openfire receives an update!

The Ignite Realtime community is happy to announce the immediate availability of an update to the inVerse plugin for Openfire, which makes the Converse.js web client available to your users.

This release brings the bugfixes released in Converse v7.0.5 and v7.0.6.

Your Openfire instance should automatically display the availability of the update. Alternatively, you can download the new release of the plugin at the inVerse plugin’s archive page. If you’ve got feedback or ideas about this plugin, come and join the conversation on Discourse!

For other release announcements and news follow us on Twitter

1 post - 1 participant

Read full topic

by danc at July 07, 2021 11:31

Erlang Solutions

FinTech Matters newsletter | July 2021

fintech newsletter

With most people’s summer plans thoroughly disrupted for another year, there is not much sign of winding down for holidays in the FinTech space just yet with mega investment rounds and big deals making the news. If you haven’t already subscribed for regular updates on tech innovation in the industry – click the button below. 

Michael Jaiyeola, FinTech Marketing Lead

[Subscribe now]

The Top Stories Right Now

Goldman Sachs starts trading on JPMorgan’s Onyx blockchain platform

Goldman Sachs Group has joined JPMorgan’s Onyx trading platform, which is based on the Ethereum blockchain. It will use the platform to execute smart contracts in the repo market via a digital version of the dollar repurchase agreement. The use of blockchain technology aims to solve the pain points of conducting large-scale transactions in the repo market.

Read more

The UK and Nordics lead Europe’s Open Banking 

A new report by Mastercard has revealed the UK and Nordic countries as being best placed to take advantage of open banking thanks to their advanced digital infrastructures, high-speed broadband accessibility and smartphone usage.

Open banking is the technology and regulation that has democratised banking data, putting consumers more in control of how they use their financial information and opening the door for innovative fintech firms to offer exciting alternative products. By early 2021, more than 3 million UK consumers and businesses used open banking products. And, most recently, we have seen global payments giant Visa acquire the Swedish open banking provider Tink in a €1.8bn deal.

Read more

Klarna raises $639M at a $45.6B valuation thanks to US growth

Klarna, the European BNPL provider, has raised fresh funding as it heads towards an IPO amid “massive growth” in the US market. Klarna is now established as the highest-valued private FinTech in Europe. The increase in US users has been dramatic, growing from 10 million towards the end of 2020 to 18 million. Overall, Klarna operates in 20 countries and has more than 90 million active users worldwide, making more than 2 million transactions a day on its platform.

Read more

What Else We’re Reading

5 Interesting Use Cases of Erlang and Elixir in Financial Services

Lessons FinTech Can Learn from Telecoms

We’re joining FinTech Week London on July 12 to co-present a panel discussion livestream from Barclays Rise – ‘What’s Next for Blockchain in Financial Services’. You can register for free here.

RabbitMQ Summit is happening 13-14 July. RabbitMQ is the Erlang-based message broker used by many of the major global financial services firms, including Goldman Sachs, Credit Suisse and JPMorgan.

Trifork, our parent company, has undergone a very successful IPO, bringing many new shareholders and investors to the group, which will help power our continued success in developing technologies that make people’s lives better.

Erlang Solutions byte size

To make sure you don’t miss out on any of our leading FinTech content, events and news, do subscribe for regular updates. We will only send you relevant high-quality content and you can unsubscribe at any time.

Connect with me on LinkedIn


The post FinTech Matters newsletter | July 2021 appeared first on Erlang Solutions.

by Michael Jaiyeola at July 07, 2021 11:11