Planet Jabber

June 30, 2022


Project Stateless File Sharing: First Steps

Hey, this is my first development update! As some of you might already know from my last blog post, my Google Summer of Code project is implementing Stateless File Sharing for Dino. This is my first XMPP project and as such, I had to learn very basic things about it. In my blog posts I’ll try to document the things I learned, with the idea that it might help someone else in the future. I won’t refrain from explaining terms you might take for granted in the XMPP world.

The idea behind Stateless File Sharing

Currently there are multiple ways to send files via XMPP, and some of those methods initiate the file transfer in very different ways. This makes it difficult to add shiny new features like blurred previews of images, because we would need to implement them for each file transfer method individually.

What we want is a unified initiation for file transfers. In that initiation, “sources” should be specified that tell the receiver how they can access that file.

Relevant XEPs

The core of the XMPP protocol is very slim, defining only general ways of communicating data. It is built to be extensible, and XEPs are exactly that: XMPP Extension Protocols.

Stateless File Sharing is XEP-0447. It depends on XEP-0446, which defines the metadata that should be sent alongside a file. XEP-0446 in turn depends on XEP-0300, where the integration of hashes is specified, and XEP-0264, which defines the usage of thumbnails.


What is a stanza?

This is a term that comes up everywhere when you dive into XMPP technical material. Since it confused me for a while, here's a quick rundown.

Stanzas are the basic form of communication between XMPP clients and servers. There are different types of them, but they are all encoded with XML. As such, they inherit XML’s structure.

An XML element can be viewed as a tree. See for instance the format example for the file metadata element (XEP-0446):

<file xmlns='urn:xmpp:file:metadata:0'>
    <name>summit.jpg</name>
    <size>3032449</size>
    <hash xmlns='urn:xmpp:hashes:2'
          algo='sha-1'>w0mcJylzCn+AfvuGdqkty2+KP48=</hash>
</file>
The root element is called 'file' and has only one attribute, 'xmlns'. Each attribute has a value assigned; in this case it's 'urn:xmpp:file:metadata:0'. The 'file' element also has child elements, all containing a text body. Only the 'hash' child element has additional attributes.
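To make the tree structure concrete, here is a short sketch in Python using the standard library's XML parser (the element values are illustrative, abbreviated from the XEP-0446 example; ElementTree folds the xmlns namespace into the tag name):

```python
import xml.etree.ElementTree as ET

# Abbreviated file metadata element in the style of XEP-0446
stanza = """
<file xmlns='urn:xmpp:file:metadata:0'>
    <name>summit.jpg</name>
    <size>3032449</size>
    <hash xmlns='urn:xmpp:hashes:2' algo='sha-1'>w0mcJylzCn+AfvuGdqkty2+KP48=</hash>
</file>
"""

root = ET.fromstring(stanza)
print(root.tag)  # the root tag, with its namespace folded in by ElementTree

# Walk the children: each has a tag, an attribute dict, and a text body
for child in root:
    print(child.tag, child.attrib, child.text)
```

Running this shows exactly the structure described above: every child carries a text body, and only the hash element carries an attribute of its own ('algo').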

Progress (as of 29/06/2022)

I’m now familiar with how Dino represents stanzas. I’ve created the base struct for the file metadata element (XEP-0446) and can serialize, send, and deserialize it. For now I’m simply integrating the code into the HTTP file transfer code; detaching it will come later.

You can track my progress on my stateless-file-sharing branch!

June 30, 2022 00:00

June 29, 2022


Newsletter: Command UI and Better Transcriptions Coming Soon

Hi everyone!

Welcome to the latest edition of your pseudo-monthly JMP update!

In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client.  Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

This month the team has been hard at work on a new major feature for Cheogram Android: the command UI.  This feature will get rid of the need to configure your account with a clunky chat bot on mobile, replacing it with a fit-for-purpose native UI that can be viewed under the contact.  And because we are implementing it using only open standards, the UI will also work for other command-using entities out there.  The feature is not quite ready for first release, but if you want to come test a pre-release just drop by the chatroom (see below for how to get to the chatroom).

Almost since the beginning of JMP one of the favourite features has been our voicemail.  Reading your voicemails instead of having to “dial in” somewhere and listen to them is a real advantage for many users.  However, the transcription is far from perfect, sometimes being slow and completely missing support for any language other than English.  We are now testing an alternative engine with anyone who is interested, this new engine gets you the transcription faster and supports dozens of languages.  Come by the chatroom if you want to help test this out before we roll it out as a full replacement.

To learn what’s happening with JMP between newsletters, here are some ways you can find out:

Thanks for reading and have a wonderful rest of your week!

by Stephen Paul Weber at June 29, 2022 18:00

Sam Whited


α CMa
Bike: Honda CB1100 (naked bike)
Engine: 1140cc air-cooled inline four
Tires: Metzeler Roadtec Z8 Interact, 110/80-18 front; 140/70-18 rear

With gas prices as high as they are I recently decided to sell my Honda S2000, Vela. Though I normally say that there is never a reason to buy a new vehicle when a used one can be had that’s just as good, depreciates less, and is cheaper, I’ve decided to break my own rule and ordered a new truck (more on that in a later post). However, even though I placed my order in October of 2021 I have yet to hear anything from the manufacturer. I was getting tired of the almost 2 hour each way commute by bicycle-bus-train-bicycle, so I decided to get a temporary vehicle until my truck comes. I wanted something that was cheap, had good gas mileage, and that was easy to repair so that I wouldn’t have to pay a mechanic to work on it if I ran into trouble, so I decided to go with a motorcycle. I ended up finding a Honda CB1100 with ~76k miles at a reasonable price and went for it!


Since I named each of my cars after constellations, I decided to name my motorcycle after an individual star: Sirius.

Star Map of Canis Major

Star map by Torsten Bronger and Kxx, CC-BY-SA

New Tires

The first thing I did was put some new rubber on it; jacking it up was interesting, to say the least.

A motorcycle supported by its center stand and some ratchet straps looped over the ceiling joists and under the fork

I went with the Metzeler Roadtec Z8 Interact tires after several recommendations from local shops. They were the same model as the worn ones already on the bike, and one of the few tires I could find in the relatively odd combination of 110/80-18 and 140/70-18. I put ~40 miles/day on the bike commuting to work, so some fresh tires were important.

No change (Metzeler Roadtec Z8 Interact Tires 110/80-18; 140/70-18)

I’m not planning on building the bike out in any particular way (nor do I have the money), so I’m not sure that this will turn into a longer build blog, but I hope to get lots of high-gas-mileage and fun miles out of Sirius. As always, I’ll update this post with any modifications I make, or possibly with short trip reports that aren’t worth a full post on their own. In particular, I’m looking forward to a ride in the North East Georgia mountains sometime very soon!

Until then, ride on!

June 29, 2022 11:22

Ignite Realtime Blog

Smack 4.4.6 released

We are happy to announce the release of Smack 4.4.6. For a high-level overview of what’s changed in Smack 4.4.6, check out Smack’s changelog.

This release mostly consists of bug fixes, many of them reported by the Jitsi folks. I would like to thank especially Damian Minkov for detailed problem descriptions, for the fruitful collaboration and for various joint bug hunts which turned out to be very successful. :slight_smile:

As always, all Smack releases are available via Maven Central.

We would like to use this occasion to point out that Smack now ships with a NOTICE file. Please note that this adds some requirements when using Smack as per the Apache License 2.0. The content of Smack’s NOTICE file can conveniently be retrieved using Smack.getNoticeStream().

2 posts - 2 participants

Read full topic

by Flow at June 29, 2022 08:22

June 27, 2022

Erlang Solutions

Gaining a Competitive Advantage in Fintech From Your Choice of Tech Stack

In our recent white paper ‘Technology Trends in Financial Services 2022’, we explained the importance of software engineering for gaining a competitive advantage in the industry. Since the start of the year, a lot has occurred on a macro level strengthening our belief that modern financial services must be based on a solid technical foundation to deliver the user experiences and business reliability needed for commercial success.

We see the role of the underlying technology (away from the customer-facing products) as being critical in enabling success in fintech in two main ways:

  • Building customer trust – by guaranteeing operational resilience and optimal availability of fintech systems
  • Exceptional user experience – rapid development of features facilitated by a tech stack that just works

These tenets, if you like, are core to the Erlang ecosystem, including the Elixir programming language and the powerful BEAM VM (or virtual machine). In this follow-up article, we will dive deeper into how your choice of tech stack impacts your business outcomes and why Erlang and Elixir will often be the right tool for the job in fintech. We also share some guiding principles that underpin our expert engineering team’s approach to projects in the financial services space.

What are the desirable characteristics of a fintech system?

Let’s first look at some of the non-negotiable must-haves of a tech stack if you are involved in a fintech development project.

A seamless customer experience

Many projects fail in the fintech space because they focus only on an application’s user interface. This short-sighted view doesn’t consider the knock-on effects of predicted changes (user growth) or unpredicted ones (like the pandemic lockdowns). For instance, your slick customer interface loses a lot of its shine when it’s connected to a legacy backend that is sluggish in responding to requests.

Key point: When it comes to modern financial services, customers expect real-time, seamless and intelligent services and not clunky experiences. To ensure you deliver this, you need predictable behaviour under heavy loads and during usage spikes, resilience and fault-tolerance without the associated costs sky-rocketing.

Technology that enables business agility

Financial services is a fast-moving industry. To make the most of emerging opportunities, whether as an incumbent or a fintech-led startup, you need to be agile from a business perspective, and that is bound up with tech agility. With the adoption of open-source technology on the rise in FS, we’re starting to see the benefits of moving away from proprietary tech, with its risk of vendor lock-in and the obstacles that can create. When you can dedicate more resources to shipping code without being constrained and forced into unfavourable trade-offs, you’re better positioned to cash in on opportunities before your competitors.

Key point: You want a system that is easy to maintain and understand; this will help with onboarding new developers to the team and focusing resources where they can make the most impact.

Tech stacks that use fewer resources

Designing for sustainability is now a key consideration for any business, especially in an industry under the microscope like financial services. The software industry is responsible for a high level of carbon usage, and shareholders and investors are now weighing this up when making investment decisions. As funding in the tech space tightens, this is something that business leaders need to be aware of as a part of their tech decision-making strategy.

CTOs and architects can help by making better technology choices. For instance, using a BEAM-based language can reduce the physical infrastructure required to as little as one-tenth of the servers. That leads to significant cost reductions and considerable savings in carbon footprint.

Key point: Minimising costs is important, but sustainability too is now part of the consideration process.

System Reliability and availability

A robust operational resiliency strategy strengthens your business case in financial services. The stress placed on systems by spikes in online commerce since the pandemic has taught us that using technologies that are proven and built to deal with the unpredictability of the modern world is critical.

One thing sure to damage any FS player, regardless of size, is a high-profile system outage. Downtime can cause severe reputational damage and attract hefty fines from regulators.

According to Gartner, the average cost of IT downtime is around $5,600 per minute, or roughly $336,000 per hour. So avoiding downtime in your fintech production system is mission-critical.
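The arithmetic behind that hourly figure is straightforward:

```python
# Gartner's widely cited average cost of IT downtime, in USD per minute
cost_per_minute = 5_600

# Scale to an hour of downtime
cost_per_hour = cost_per_minute * 60
print(cost_per_hour)  # 336000
```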

Key point: Your fintech system must be fault-tolerant, able to handle spikes in traffic and always be available and scalable.

How Erlang/Elixir is meeting these challenges

Erlang, Elixir and the BEAM VM overview

Erlang is a programming language designed to build massively scalable, soft real-time systems that require high availability. Elixir is a programming language that runs on the BEAM VM – the same virtual machine as Erlang – and can be adopted throughout the tech stack. Many of the world’s largest banking, e-commerce and fintech companies depend on these technologies to power their tech stacks, such as Klarna, SumUp and SolarisBank.

Erlang was born in telecoms, where phone systems had to work by law and penalties were huge if they failed, so vendors designed in resilience from day one rather than as an afterthought. Erlang and Elixir, as programming languages originally designed for telecoms, are the right tools for the job for many fintech use cases where fault tolerance and reliability are also essential.

Of course, many languages are used successfully across financial services, but the big differentiator with Erlang and Elixir is that high availability, fault tolerance and scalability are built in, out of the box. This makes developers’ lives easier and allows them the freedom to deliver innovative features to end-users. For fast-moving verticals such as fintech, the robust libraries and reduced lines of code compared to C, C++ or Java mean that you are set up for rapid prototyping and getting to market before the opposition.

Key attributes of Erlang/Elixir for fintech development:

  • Can handle a huge number of concurrent activities
  • Ideal for when actions must be performed at a certain point in time or within a specific time (soft real-time)
  • Benefits of system distribution
  • Ideal for massive software systems
  • Software maintenance without stopping the system
  • Built-in fault tolerance and reliability

If you’re in the early stages of your fintech project, you may not need these capabilities right away, but trying to retrofit them later will cost you valuable time and resources. Our expert team has helped many teams adopt BEAM-based technology at various points in their business lifecycle. Talk to us about how we can help you.

Let’s start a discussion about how we can help with your fintech project >

Now let’s look at how Erlang/Elixir and the BEAM VM deliver against the desirable fintech characteristics outlined in the previous section.

System availability and resilience during unpredicted events

Functional programming helps developers write reliable software. Using the BEAM VM means you can achieve reliability of up to ‘nine nines’ (99.9999999%) – almost zero downtime for your system, obviously very desirable in any fintech context.

This comes from Erlang/Elixir systems having ‘no single point of failure’ that risks bringing down your entire system. The ‘actor model’ (where parallel processes communicate with each other via messages) crucially does not have shared memory, so errors that inevitably will occur are localised and will not impact the rest of your system.

Fault tolerance is another crucial aspect of Erlang/Elixir systems, making them a good option for your fintech project. ‘Supervisors’ are programmed with instructions on how to restart parts of a system when things do fail. This involves going back to a known initial state that is guaranteed to work.
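OTP supervisors are Erlang-specific, but the core idea (restart a failed worker from a known-good initial state instead of propagating the crash) can be sketched in a few lines of Python. All names here are illustrative, not part of any OTP API; the shared counter simulates a transient fault that clears after two crashes:

```python
import traceback

def supervise(make_worker, max_restarts=3):
    """Run a worker; on a crash, rebuild it from its initial state and retry."""
    for attempt in range(max_restarts + 1):
        worker = make_worker()        # fresh worker, known-good initial state
        try:
            return worker()
        except Exception:
            traceback.print_exc()     # log the failure, then restart
    raise RuntimeError("worker restarted too many times, giving up")

# Simulated transient fault: the worker crashes twice, then succeeds
state = {"crashes_left": 2}

def make_worker():
    def worker():
        if state["crashes_left"] > 0:
            state["crashes_left"] -= 1
            raise ValueError("transient failure")
        return "ok"
    return worker

print(supervise(make_worker))  # ok
```

The two failures are logged and contained; the caller only ever sees the successful result, which is the essence of the “let it crash and restart” philosophy described above.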

The end result is that using Erlang/Elixir means that your system will achieve unrivalled availability with far less effort and resources than other programming languages.

System scalability for when demand grows

Along with unmatched system uptime and availability, Erlang and Elixir offer scalability that makes your system able to handle changes in demand and sudden spikes in traffic. This is possible without many of the difficulties of trying to scale with other programming languages.

With Erlang/Elixir, your code allows thousands of ‘processes’ to run concurrently on the same machine – in other words, you are making the most of each machine’s resources (vertical scaling). 

These processes are distributed, meaning they can communicate with processes on other machines within the network enabling developers to coordinate work across multiple nodes (horizontal scaling).

In the fintech startup space especially, having confidence that if you achieve dramatic levels of fast growth, your tech system will stand up to demand and not require a costly rewrite of the codebase can be a critical factor in maintaining momentum.

Concurrency model for high volume transactional systems

Concurrent programming makes it appear as if multiple sequences of commands are being executed in parallel. Erlang and Elixir are ideal for workloads that involve a considerable amount of concurrency, such as the transaction-intensive segments of financial services like payments and trading.

The functional nature of Erlang/Elixir, plus the lightweight nature of how the BEAM executes processes, makes writing concurrent programs far more straightforward than with other languages.

If your fintech project expects to need to process massive amounts of transactional data from different sources, then Erlang/Elixir could be the most frictionless way for your team to go about building it.
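Python’s asyncio tasks are nowhere near BEAM processes in weight or isolation, but they can at least illustrate the shape of the model: many concurrent units of work interleaved by one scheduler rather than run serially. The transaction handler below is invented purely for illustration:

```python
import asyncio

async def handle_transaction(tx_id):
    # Simulate an I/O-bound round-trip (e.g. a payment authorisation)
    await asyncio.sleep(0.01)
    return f"tx-{tx_id}: settled"

async def main():
    # Launch 1,000 concurrent "transactions"; they interleave while waiting on I/O
    return await asyncio.gather(*(handle_transaction(i) for i in range(1000)))

results = asyncio.run(main())
print(len(results), results[0])  # 1000 tx-0: settled
```

Because each task spends its time waiting on I/O, all 1,000 complete in roughly the time of one; on the BEAM the same shape scales to millions of truly isolated processes.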

Developer friendly 

There are many reasons why developers enjoy working with Erlang/Elixir – in fact, Elixir has just been voted the second most loved programming language in the 2022 Stack Overflow Developer Survey.

OTP middleware (the secret sauce behind Erlang/Elixir) abstracts the technical difficulty of concurrency and handling system failures. It allows your tech team the space to focus on business logic instead of time-consuming computational plumbing and facilitates fast prototyping and development of new features. 

Speed to market is a crucial differentiator in the competitive fintech landscape; with Erlang/Elixir you can release new features in time to attract new customers and retain existing ones better than with many other languages.

Because using Erlang/Elixir for your project means less code and a lightweight process execution model demanding fewer CPU resources, you will need fewer servers, reducing your energy consumption and infrastructure costs. An independent study at Heriot-Watt University comparing an application written in Erlang with one written in C++ found that the Erlang codebase required 4-20 times less code.

During the beginning of the pandemic in 2020, Elixir was used to launch the world’s first WhatsApp-based COVID-19 response in just 5 days. The service was designed, deployed, stress-tested, and launched. It scaled to 450K unique users on the first day and has since grown to serve over 7.5 million people.

Key takeaways about using Erlang and Elixir in fintech

We can summarise that what is needed for success in fintech is a real-time, secure, reliable and scalable system that is easy to maintain and cost-efficient. Furthermore, you need a stack that lets your developers ship code and release new products and features quickly. The Erlang Ecosystem (Erlang, Elixir and the BEAM VM) sets a solid foundation for your fintech startup or project to be successful. 

With a reliable, easy-to-maintain code base, your most valuable resource (your tech talent) will be freed up to concentrate on delivering value and competitive advantage that delivers to your bottom line.

With the right financial products to market and an Erlang/Elixir backend, you can be confident in delivering smooth and fast end-user experiences that are always available and seamless. This is crucial if you are looking to digitally onboard new customers, operate payment services, handle vast numbers of transactions or build trust in emerging areas such as cryptocurrency or blockchain.

The benefits of using Erlang/Elixir for your fintech project

2x FASTER DEVELOPMENT – of new services, thanks to the language’s design and the OTP middleware: a set of frameworks, principles, and design patterns that guide and support the structure, design, implementation, and deployment of your system.

10x BETTER RELIABILITY – services that are down less than 5 minutes per year, thanks to the fault-tolerance and resilience mechanisms built into the language and the OTP middleware (e.g. live software upgrades and generic error handling), as seen by Ericsson in their mobile network packet routers.

10x MORE SECURE – solutions that are hard to hack and crash through denial of service attacks thanks to the language construction with lack of pointers, use of messages instead of shared memory and immutable state rather than variable assignments, as seen by the reduced number of vulnerabilities compared to other languages.

10x MORE USERS – handling of the potential millions of transactions per second within milliseconds thanks to a battle-tested VM.

10x LESS COST AND ENERGY CONSUMPTION – thanks to fewer servers being needed and the fact that the BEAM is open source. By swapping from Ruby to Elixir, Bleacher Report managed to reduce their hardware requirements from 150 servers to just 8.

We will be an events partner with Fintech Week London again this year, hosting a special panel and networking evening at CodeNode in Central London on 12 July from 6 pm. Register your interest in attending via this link here.

The post Gaining a Competitive Advantage in Fintech From Your Choice of Tech Stack appeared first on Erlang Solutions.

by Michael Jaiyeola at June 27, 2022 16:46

The XMPP Standards Foundation

On-Boarding Experience with XSF (Converse)

Hi, I am PawBud. I will be working as a GSoC Contributor with XSF. To know more about my project kindly read this blog. Feel free to contact me through my email to ask me anything you want!

Before I start, I feel that some things that I am going to write in this blog might offend someone. Kindly note that these thoughts are my own. Once again you are free to contact me through my email if you have anything to say, I would be happy to hear your thoughts.

On-Boarding Experience with XSF & Community Bonding Period

Well, I must say, this is unlike any technical internship that you will experience elsewhere. The level of attention and support that you get from the organization admins and your project mentors from day 1 is genuinely surprising. I mean, I started having weekly calls with my mentor before the coding period even began, and I am still quite thankful that my mentors spent their time guiding me. Believe it or not, as soon as the results were announced and I saw my name, my first reaction was “oops, well this is bad”. I wasn’t ready for it! Meeting JC (my primary mentor) definitely helped me a lot during the community bonding period. Turns out not everyone feels the same (thanks to Patiga, I knew about this). Still, from my past experience in internships, I do believe that self-doubt is something that our present-day society profits from, and it is something that one can’t afford if one wants to improve.

I wanted to spend as much time with the source code as I could, but my university exams did not allow me to do so. I did manage to work on some minor issues, but I still feel that I need to put in as much work as I can. As of writing this blog, it’s the second week of GSoC. I have to work on designing and implementing a UI for the jingle call modal and will hopefully be done with that, and the tests, by the end of this week.

Why Choose GSoC & specifically XSF?

Ok, this is an interesting one, I still remember the special treatment I received from the whole community once the GSoC results were announced. I think Eddie instantly shared the results with the community through the XSF GSoC group chat, Twitter and a couple of other XSF social media pages. I genuinely feel pampered by the overwhelming support of the community, but I am so proud that I chose GSoC over a usual company internship.

I took a course on software engineering during my sophomore year, and until now I had never seen or experienced its concepts being implemented. In Converse, I learned more about the applications of test-driven software development in 2 weeks than I ever did during my whole semester. Turns out, practising a technique is actually the best way to learn it. The fact that my mentor takes time out, which he could very easily spend on developing Converse, just to guide me through the smallest of obstructions, is what really makes this whole thing special. No manager or supervisor in any company is going to give you this kind of attention! Period.

There is this book called Zucked by Roger McNamee. For those of you who haven’t read it, here is the gist: Roger was one of the initial investors in Facebook and a mentor to Mark Zuckerberg. Throughout the book, he explains how Facebook turned into an evil giant, and the guilt he feels that he could not stop Zuckerberg and Facebook from pivoting to their present-day state. My hate for social media giants like Instagram, Facebook and Twitter started after I graduated from secondary school, because I started watching a lot of technology-related YouTube channels and understood how toxic these social media giants actually are and how they exploit human psychology by giving instant gratification, and blah blah blah. I must say, I was infuriated and disgusted that all this time, I was just a product of these social media giants.

That mentality has grown stronger over the years and yes, I am not on any social media except Reddit, which I occasionally use to text my friends. Hence, XSF was a no-brainer for me, plus I wanted to learn Javascript well, so I only submitted one proposal, which was for Converse.

This was a personal and a non-technical blog post. To be honest, I am still figuring out how to write blogs, as this is my first blog post. I think I will start writing more technical and project-related blog posts after this.


In conclusion, I would like to thank my mentors (JC & Vanitas), Eddie (XSF GSoC admin), Patiga (co-contributor for the XSF) and the whole XSF community for putting their trust in me. I would also like to take this opportunity to thank Zerefwayne & Yash Rathore for guiding me in the journey leading up to my getting selected as a contributor for the XSF. I look forward to learning from all of you even after the GSoC period is finished.

For those of you who could not make it to GSoC with any organization including XSF

Remember, open source is not just GSoC. I would still contribute to various organizations had I not been selected for GSoC, because it’s fun! Imagine writing code that is used by potentially millions of users daily. All those late-night hotfixes and mistakes that you make do become good memories and, of course, you get to learn a lot. The open source community is so friendly that I have yet to see an organization whose core contributors do not support newcomers.

GSoC should be a side goal in my opinion. If you get selected, that’s good for you. If you don’t get selected, you haven’t lost anything, you simply gain knowledge and contributions.

June 27, 2022 00:00

June 23, 2022

Ignite Realtime Blog

REST API Openfire plugin 1.8.1 released!

Earlier today, version 1.8.1 of the Openfire REST API plugin was released. This version removes the need to authenticate for status endpoints, adds new endpoints for bulk modifications of affiliations on MUC rooms, as well as a healthy number of other bugfixes.

The updated plugin should become available for download in your Openfire admin console in the course of the next few hours. Alternatively, you can download the plugin directly from the plugin’s archive page.

For other release announcements and news follow us on Twitter

1 post - 1 participant

Read full topic

by danc at June 23, 2022 13:09

June 22, 2022

Erlang Solutions

Contract Programming an Elixir approach – Part 1

This series explores the concepts found in Contract Programming and adapts them to the Elixir language. Erlang and BEAM languages, in general, are surrounded by philosophies like “fail fast”, “defensive programming”, and “offensive programming”, and contract programming can be a nice addition. The series is also available on Github.

You will find a lot of unconventional uses of Elixir here. There are probably things you would not try in production; however, throughout the series we will share some well-established Elixir libraries that already use contracts very well.

Programming by contract?

It is an approach to program verification that relies on the successful execution of statements; not that different from what we do with ExUnit when testing:

defmodule Program do
  def sum_all(numbers), do: Enum.sum(numbers)
end

ExUnit.start(autorun: false)

defmodule ProgramTest do
  use ExUnit.Case

  test "Result is the sum of all numbers" do
    assert Program.sum_all([-10, -5, 0, 5, 10]) == 0
  end

  test "Should be able to process ranges" do
    assert Program.sum_all(0..10) == 55
  end

  test "Passed in parameter should only be a list or range" do
    assert_raise Protocol.UndefinedError,
                 ~s(protocol Enumerable not implemented for "1 2 3" of type BitString),
                 fn -> Program.sum_all("1 2 3") end
  end

  test "All parameters must be of numeric value" do
    assert_raise ArithmeticError, ~s(bad argument in arithmetic expression), fn ->
      Program.sum_all([["1", "2", "3"]])
    end
  end
end

ExUnit.run()
Finished in 0.00 seconds (0.00s async, 0.00s sync)
4 tests, 0 failures

In the example above, we’re taking Program.sum_all/1 and verifying its behavior by giving it inputs and matching them with the outputs. In a sense, our function becomes a component that we can only inspect from the outside. Contract programming differs in that our assertions get embedded inside the components of our system. Let’s try to use the assert keyword within the program:

defmodule VerifiedProgram do
  use ExUnit.Case

  def sum_all(numbers) do
    assert is_list(numbers) || is_struct(numbers, Range),
           "Passed in parameter must be a list or range"

    result =
      Enum.reduce(numbers, 0, fn number, accumulator ->
        assert is_number(number), "Element #{inspect(number)} is not a number"
        accumulator + number
      end)

    assert is_number(result), "Result didn't return a number, got #{inspect(result)}"

    result
  end
end

Our solution became a bit more verbose, but hopefully, we’re now able to extract the error points through evaluation:

VerifiedProgram.sum_all("1 2 3")
** (ExUnit.AssertionError)

Passed in parameter must be a list or range

VerifiedProgram.sum_all(["1", "2", "3"])
** (ExUnit.AssertionError)

Element "1" is not a number

This style of verification shifts the focus. Instead of just checking input/output, we’re now explicitly limiting the function reach. When something unexpected happens, we stop the program entirely to try to give a reasonable error.

This is how the concept of “contracts” works in a very basic sense.
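For readers outside Elixir, the same embedded-assertion style can be sketched in Python with plain assert statements (with the caveat that Python strips asserts under the -O flag, which matters for runtime contracts):

```python
def sum_all(numbers):
    # Precondition: the argument must be a list or a range
    assert isinstance(numbers, (list, range)), \
        "Passed in parameter must be a list or range"

    result = 0
    for number in numbers:
        # Invariant: every element must be numeric
        assert isinstance(number, (int, float)), \
            f"Element {number!r} is not a number"
        result += number

    # Postcondition: the computation produced a number
    assert isinstance(result, (int, float)), \
        f"Result didn't return a number, got {result!r}"
    return result

print(sum_all(range(11)))  # 55
```

As in the Elixir version, a bad input such as sum_all("1 2 3") fails immediately at the precondition with a readable message, instead of surfacing as a confusing error deeper in the call stack.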

How to run tests in contract programming

Having contracts in our codebase doesn’t mean that we can stop testing. We should still write them and maybe even reduce the scope of our checks:

defmodule VerifiedProgramTest do
  use ExUnit.Case

  test "Result is the sum of all numbers" do
    assert VerifiedProgram.sum_all(0..10) == 55
    assert VerifiedProgram.sum_all([-10, -5, 0, 5, 10]) == 0
    assert VerifiedProgram.sum_all([1.11, 2.22, 3.33]) == 6.66
  end
end

Finished in 0.00 seconds (0.00s async, 0.00s sync)
1 test, 0 failures
By exercising our functions at runtime or test time, we can re-align the expectations of our system components when requirements change:
# Now we expect this to work
VerifiedProgram.sum_all("1 2 3 4")
** (ExUnit.AssertionError) 

Passed in parameter must be a list or range

We also need to make the changes required for it to happen. In this case, we need to expand our domain to also include stringified numbers, separated by a space.
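Sticking with a shell sketch (again a hypothetical stand-in, not the article's code), the expanded domain might be handled by a separate function that accepts a single string of space-separated numbers:

```shell
# Hypothetical sketch of the expanded domain: one string of
# space-separated numbers (contract checks omitted for brevity).
sum_all_string() {
  local total=0 n
  for n in $1; do        # unquoted on purpose: split on spaces
    total=$((total + n))
  done
  echo "$total"
}

sum_all_string "1 2 3 4"   # → 10
```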

Should we add use ExUnit everywhere then?

As seen in the examples above, there’s nothing stopping us from trying the assert keyword. It is a creative way to verify our system components. However, I feel that the failures are designed in such a way as to be used in a test environment, not necessarily at runtime.

From the docs: “In general, a developer will want to use the general assert macro in tests. This macro introspects your code and provides good reporting whenever there is a failure.”

Thankfully for us, in Elixir we have a more primitive mechanism with which we can assert data effectively: pattern matching. I would like to explore this more in-depth in the second installment of this contract series.

Main takeaways

  • Contract programming is a technique for program verification that can be applied in Elixir.
  • Similar to testing, but verification isn’t limited to test time.
  • We embedded assertions within our code to check for failures.
  • Although not endorsed, we may take advantage of ExUnit to do contracts in Elixir.
  • Other mechanisms native to Erlang and Elixir may be used to achieve similar results.

More info on contracts

About the author

At his desk, Raúl spends his time as an Elixir programmer, still distilling some thoughts on the topic of “contract programming”; away from it, he’s a recent dad who enjoys simulation games and trying out traditional food. He was born in Tijuana, México, where he still lives and works.

The post Contract Programming an Elixir approach – Part 1 appeared first on Erlang Solutions.

by Raul Chouza at June 22, 2022 10:32

June 21, 2022


Announcing ejabberd DEB and RPM Repositories

Today, we are happy to announce our official Linux packages repository: a source of .deb and .rpm packages for ejabberd Community Server. This repository provides a new way for the community to install and upgrade ejabberd.

All details on how to set this up are described on the dedicated website:

ejabberd installation log

What packages are available?

Currently, the latest DEB and RPM packages are available for the open-source ejabberd version (eCS 22.05). New versions will be published as they’re released.

Which platforms are currently supported?

The DEB and RPM packages currently target popular amd64 and arm64 systems. So if you are using Debian or CentOS, you should be fine. If your platform is not on this list, you can open an issue on GitHub describing which platform you would like to see added.

We aim to expand the support to other widely adopted architectures and platforms in the future.

What is next?

In addition to providing packages for upcoming ejabberd releases, as well as expanding distribution and architecture support, we will keep improving this official repository. Away from prying eyes, we’ve also been studying the possibility of making ejabberd’s installers (.deb, .rpm, .run) available directly on GitHub below each release, and that’s now the case! You can check them here now.

We’re looking forward to the community feedback on these packages to provide the best possible experience. Feel free to get in touch if needed!

The post Announcing ejabberd DEB and RPM Repositories first appeared on ProcessOne.

by Adrien at June 21, 2022 13:28

ejabberd 22.05

A new ejabberd release is finally here! ejabberd 22.05 includes five months of work and 200 commits, bringing many improvements (MQTT, MUC, PubSub, …) and bug fixes.

ejabberd 22.05 released
– Improved MQTT, MUC, and ConverseJS integration
– New installers and container
– Support Erlang/OTP 25

When upgrading from the previous version please notice: there are minor changes in SQL schemas, the included rebar and rebar3 binaries require Erlang/OTP 22 or higher, and make rel uses different paths. There are no breaking changes in configuration, and only one change in commands API.

A more detailed explanation of those topics and other features:

New Indexes in SQL for MUC

Two new indexes were added to optimize MUC. Those indexes can be added to the database before upgrading to 22.05, as they will not affect older versions.

To update an existing database, depending on the schema used to create it:

  • MySQL (mysql.sql or
CREATE INDEX i_muc_room_host_created_at ON muc_room(host(75), created_at);
CREATE INDEX i_muc_room_subscribers_jid USING BTREE ON muc_room_subscribers(jid);
  • PostgreSQL (pg.sql or
CREATE INDEX i_muc_room_host_created_at ON muc_room USING btree (host, created_at);
CREATE INDEX i_muc_room_subscribers_jid ON muc_room_subscribers USING btree (jid);
  • SQLite (lite.sql or
CREATE INDEX i_muc_room_host_created_at ON muc_room (host, created_at);
CREATE INDEX i_muc_room_subscribers_jid ON muc_room_subscribers(jid);
  • MS SQL (mssql.sql):
CREATE INDEX [muc_room_host_created_at] ON [muc_room] (host, created_at);
CREATE INDEX [muc_room_subscribers_jid] ON [muc_room_subscribers] (jid);

Fixes in PostgreSQL New Schema

If you moved your PostgreSQL database from old to new schema using mod_admin_update_sql or the update_sql API command, be aware that those methods forgot to perform some updates.

To fix an existing PostgreSQL database schema, apply those changes manually:

ALTER TABLE archive DROP CONSTRAINT i_archive_sh_peer;
ALTER TABLE archive DROP CONSTRAINT i_archive_sh_bare_peer;
CREATE INDEX i_archive_sh_username_peer ON archive USING btree (server_host, username, peer);
CREATE INDEX i_archive_sh_username_bare_peer ON archive USING btree (server_host, username, bare_peer);

DROP TABLE carboncopy;

ALTER TABLE push_session DROP CONSTRAINT i_push_session_susn;
CREATE UNIQUE INDEX i_push_session_susn ON push_session USING btree (server_host, username, service, node);

ALTER TABLE mix_pam DROP CONSTRAINT i_mix_pam_us;
CREATE UNIQUE INDEX i_mix_pam ON mix_pam (username, server_host, channel, service);
CREATE INDEX i_mix_pam_us ON mix_pam (username, server_host);

CREATE UNIQUE INDEX i_route ON route USING btree (domain, server_host, node, pid);

ALTER TABLE mqtt_pub DROP CONSTRAINT i_mqtt_topic;
CREATE UNIQUE INDEX i_mqtt_topic_server ON mqtt_pub (topic, server_host);

API Changes

The oauth_revoke_token API command has changed its returned result. Check oauth_revoke_token documentation.

API Batch Alternatives

If you use the command delete_old_messages periodically and noticed it can bring your system to an undesirable state with high CPU and memory consumption…

Now you can use delete_old_messages_batch, which performs the operation in batches, by setting the number of messages to delete per batch and the desired rate of messages to delete per minute.

Two companion commands are added: delete_old_messages_status to check the status of the batch operation, and abort_delete_old_messages to abort the batch process.

There are also new equivalent commands to delete old MAM messages.

Erlang/OTP and Elixir

Erlang/OTP 25 is now supported. As that’s a brand new version, for stable deployments you may prefer to use 24.3 or another lower version.

Notice that ejabberd can be compiled with Erlang as old as 19.3, but the rebar and rebar3 binaries included with ejabberd 22.05 require at least Erlang 22. This means that, to compile ejabberd 22.05 with those tools using an Erlang version between 19.3 and 21.3, you should get yourself a compatible rebar/rebar3 binary. If your operating system doesn’t provide a suitable one, you can download the old ones: rebar from ejabberd 21.12 and rebar3 from ejabberd 21.12.

Regarding Elixir supported versions:

  • Elixir 1.4 or higher is supported for compilation, but:
  • Elixir 1.10 is required to build OTP releases (make rel and make dev)
  • Elixir 1.11 is required to run make relive
  • Elixir lower than 1.11.4 requires Erlang lower than 24 to build OTP releases


mod_conversejs

mod_conversejs was introduced in ejabberd 21.12 to serve a simple page for the Converse.js XMPP web browser client.

Several improvements in mod_conversejs now allow a simpler configuration, and more customization at the same time:

  • The options now support the @HOST@ keyword
  • The options now support auto, which uses local or remote Converse files
  • The Converse’s auth and register options are set based on ejabberd’s configuration
  • default_domain option now has @HOST@ as default value, not the first defined vhost
  • conversejs_options: New option to setup additional options for Converse
  • conversejs_resources: New option to serve converse.js files (no need to setup an additional web server)

For example, if you downloaded Converse, you can now set up WebSocket, mod_conversejs, and serve Converse without an additional web server, on an encrypted port, as simply as:

listen:
  -
    port: 443
    module: ejabberd_http
    tls: true
    request_handlers:
      /websocket: ejabberd_http_ws
      /conversejs: mod_conversejs

modules:
  mod_conversejs:
    conversejs_resources: "/home/ejabberd/conversejs-9.0.0/package/dist"

With that configuration, Converse is available at https://localhost/conversejs

More details in the mod_conversejs documentation.

New Installers

For many years, the release of a new ejabberd source code package was accompanied with binary installers, built using InstallBuilder and CEAN, and available in the ProcessOne Downloads page.

Since this ejabberd 22.05, there are new installers that use a completely different build method:

  • they are built using the tools provided in PR 3781
  • they use the most recent stable dependencies
  • they are available for linux/amd64 and linux/arm64 architectures
  • they are built automatically using the Installers Workflow
  • for stable releases, they are available for download in the ejabberd GitHub Releases
  • they are built also for every commit in master branch, and available for download in the results of Installers Workflow
  • if the installer is run by root, it installs into /opt/ejabberd* and sets up a systemd service
  • if run by a regular user, it asks for the installation path

However, compared to the old installers, those new installers:

  • do not ask for domain: now you must edit ejabberd.yml and set the hosts option
  • do not register the first Jabber account and grant admin rights: you must do it yourself

Please give those new installers a try, and report any problems, improvements or ideas.

New Container Image

In addition to the ejabberd/ecs Docker container image published in Docker Hub, there is a new container image published in ejabberd GitHub Packages.

Its usage is similar to the ejabberd/ecs image, with some benefits and changes worth noting:

  • it’s available for linux/amd64 and linux/arm64 architectures
  • it’s built also for master branch, in addition to the stable ejabberd releases
  • it includes fewer customizations to the base ejabberd compared to ejabberd/ecs
  • it stores data in /opt/ejabberd/ instead of /home/ejabberd/

See its documentation in CONTAINER.

If you used previous images from that GitHub Packages registry please note: until now they were identical to the ones in Docker Hub, but the new 22.05 image is slightly different: it stores data in /opt/ejabberd/ instead of /home/ejabberd/. You can update the paths to the container volumes in this new image, or switch to Docker Hub to continue using the same old images.

Source Code Package

Until now, the source code package available in the ProcessOne Downloads page was prepared manually together with the binary installers. Now all this is automated in GitHub, and the new source code package is simply the same one available in GitHub Tags.

The differences are:

  • instead of tgz it’s now named tar.gz
  • it contains the .gitignore file
  • it lacks the configure and aclocal.m4 files

The compilation instructions are slightly improved and moved to a separate file:

New make relive

This new make relive is similar to ejabberdctl live, but without requiring you to install or build an OTP release: it compiles and starts ejabberd immediately!

Quickly put:

  • Prepare it with: ./ && ./configure --with-rebar=./rebar3 && make
  • Or use this if you installed Elixir: ./ && ./configure --with-rebar=mix && make
  • Start without installing (it recompiles when necessary): make relive
  • It stores config, database and logs in _build/relive/
  • There you can find the well-known script: _build/relive/ejabberdctl
  • In that Erlang shell, recompile source code and reload at runtime: ejabberd_admin:update().

Please note, when make relive uses Elixir’s Mix instead of Rebar3, it requires Elixir 1.11.0 or higher.

New GitHub Workflows

As you may notice while reading these release notes, there are new GitHub workflows to build and publish the new installers and the container images, in addition to the Common Tests suite.

The last workflow added is Runtime. It ensures that ejabberd compiles with Erlang/OTP 19.3 up to 25, using rebar, rebar3 and several Elixir versions. It also checks that an OTP release can be built and started, an account registered, and ejabberd stopped.

See its source code runtime.yml and its results.

If you have troubles compiling ejabberd, check if those results reproduce your problem, and also see the steps used to compile and start ejabberd using Ubuntu.

Translations Updates

The German, Portuguese, Portuguese (Brazil), Spanish and Catalan translations are updated and completed. The French translation was greatly improved and updated too.

Documentation Improvements

Some sections in the ejabberd Documentation are improved:



Core

  • C2S: Don't expect that socket will be available in c2s_terminated hook
  • Event handling process hook tracing
  • Guard against erlang:system_info(logical_processors) not always returning a number
  • domain_balancing: Allow for specifying type only, without specifying component_number


MQTT

  • Add TLS certificate authentication for MQTT connections
  • Fix login when generating client id, keep connection record (#3593)
  • Pass property name as expected in mqtt_codec (fixes login using MQTT 5)
  • Support MQTT subscriptions spread over the cluster (#3750)


MUC

  • Attach meta field with real jid to mucsub subscription events
  • Handle user removal
  • Stop empty MUC rooms 30 seconds after creation
  • default_room_options: Update options configurable
  • subscribe_room_many_max_users: New option in mod_muc_admin


mod_conversejs

  • Improved options to support @HOST@ and auto values
  • Set auth and register options based on ejabberd configuration
  • conversejs_options: New option
  • conversejs_resources: New option


PubSub

  • mod_pubsub: Allow for limiting item_expire value
  • mod_pubsub: Unsubscribe JID on whitelist removal
  • node_pep: Add config-node and multi-items features (#3714)


SQL

  • Improve compatibility with various db engine versions
  • Sync old-to-new schema script with reality (#3790)
  • Slight improvement in MSSQL testing support, but not yet complete

Other Modules

  • auth_jwt: Check if a user is active in SM for a JWT-authenticated user (#3795)
  • mod_configure: Implement Get List of Registered/Online Users from XEP-0133
  • mod_host_meta: New module to serve host-meta files, see XEP-0156
  • mod_mam: Store all mucsub notifications not only message notifications
  • mod_ping: Delete ping timer if resource is gone after the ping has been sent
  • mod_ping: Don’t send ping if resource is gone
  • mod_push: Fix notifications for pending sessions (XEP-0198)
  • mod_push: Keep push session ID on session resume
  • mod_shared_roster: Adjust special group cache size
  • mod_shared_roster: Normalize JID on unset_presence (#3752)
  • mod_stun_disco: Fix parsing of IPv6 listeners


Dependencies

  • autoconf: Supported from 2.59 to the new 2.71
  • fast_tls: Update to 1.1.14 to support OpenSSL 3
  • jiffy: Update to 1.1.1 to support Erlang/OTP 25.0-rc1
  • luerl: Update to 1.0.0, now available in
  • lager: This dependency is used only when Erlang is older than 22
  • rebar2: Updated binary to work from Erlang/OTP 22 to 25
  • rebar3: Updated binary to work from Erlang/OTP 22 to 25
  • make update: Fix when used with rebar 3.18


  • mix release: Copy include/ files for ejabberd, deps and otp, in mix.exs
  • rebar3 release: Fix ERTS path in ejabberdctl
  • Set default ejabberd version number when not using git
  • mix.exs: Move some dependencies as optional
  • mix.exs: No need to use Distillery, Elixir has built-in support for OTP releases (#3788)
  • tools/make-binaries: New script for building Linux binaries
  • tools/make-installers: New script for building command line installers


  • New make relive similar to ejabberdctl live without installing
  • ejabberdctl: Fix some warnings detected by ShellCheck
  • ejabberdctl: Mention in the help: etop, ping and started/stopped
  • make rel: Switch to paths: conf/, database/, logs/
  • mix.exs: Add -boot and -boot_var in ejabberdctl instead of adding vm.args
  • tools/ Fix some warnings detected by ShellCheck


Commands

  • Accept more types of ejabberdctl command arguments as JSON-encoded
  • delete_old_mam_messages_batch: New command with rate limit
  • delete_old_messages_batch: New command with rate limit
  • get_room_occupants_number: Don’t request the whole MUC room state (#3684, #1964)
  • get_vcard: Add support for MUC room vCard
  • oauth_revoke_token: Add support to work with all backends
  • room_unused_*: Optimize commands in SQL by reusing created_at
  • rooms_unused_...: Let get_all_rooms handle global argument (#3726)
  • stop|restart: Terminate ejabberd_sm before everything else to ensure sessions closing (#3641)
  • subscribe_room_many: New command


Translations

  • Updated Catalan
  • Updated French
  • Updated German
  • Updated Portuguese
  • Updated Portuguese (Brazil)
  • Updated Spanish


Workflows

  • CI: Publish CT logs and Cover on failure to an external GH Pages repo
  • CI: Test shell scripts using ShellCheck (#3738)
  • Container: New workflow to build and publish containers
  • Installers: Add job to create draft release
  • Installers: New workflow to build binary packages
  • Runtime: New workflow to test compilation, rel, starting and ejabberdctl

Full Changelog

All changes between 21.12 and 22.05

ejabberd 22.05 download & feedback

As usual, the release is tagged in the Git source code repository on GitHub.

The source package and installers are now available in GitHub Release / Tags. To check the *.asc signature files, see How to verify ProcessOne downloads integrity.

The Docker image is in Docker Hub, and a new Container image at GitHub Packages.

If you suspect that you’ve found a bug, please search or file a bug report on GitHub Issues.

The post ejabberd 22.05 first appeared on ProcessOne.

by Jérôme Sautret at June 21, 2022 13:27


Gajim 1.4.5

Gajim 1.4.5 brings an important fix for ad-hoc commands, as well as some improvements for message styling and further bug fixes.

What’s New

This version fixes a bug to make ad-hoc commands available again. But there are some general improvements as well: A new shortcut for toggling the chat list has been added. Now you can hide the chat list by pressing Ctrl+R. Furthermore, completion for emoji shortcodes can now be disabled in Gajim’s preferences.

Fixes and improvements

Several issues have been fixed in this release.

  • Use nickname provided in subscription requests
  • Group chat outcasts: Make removing users work
  • Chat: Display corrections for /me messages correctly

Have a look at the changelog for the complete list.


As always, don’t hesitate to contact us at or open an issue on our GitLab.

June 21, 2022 00:00


Stateless File Sharing GSoC project


I’m Patiga, a computer science student from Germany and a new contributor to Dino. The yearly Google Summer of Code has started, and I’m glad to be part of it. This time, you can look forward to a modernized file transfer called “Stateless File Sharing”.

End users can look forward to blurred previews of large images and other metadata alongside file transfers. You might already know this from other messengers, now it’s coming to Dino/XMPP.

There are some more technicalities, which we will dive into deeper in upcoming blog posts. If you want to have a peek at some technical information already, see the relevant XEP-0447.

June 21, 2022 00:00

June 20, 2022

Prosodical Thoughts

Modernizing XMPP authentication and authorization

We’re excited to announce that we have received funding, from the EU’s NGI Assure via the NLnet Foundation, to work on some important enhancements to Prosody and XMPP. Our work will be focusing on XMPP authentication and authorization, and bringing it up to date with current and emerging best practices.

What kind of changes are we talking about? Well, there are a few aspects we are planning to work on. Let’s start with “authentication” - that is, how you prove to the server that you are who you claim to be. We’ll skim the surface of some of the technologies used, but the post won’t descend so deep that most people can’t follow along.


Traditionally, authentication is accomplished by providing a password to your client app. XMPP uses a standard authentication protocol known as the “Simple Authentication and Security Layer” (SASL), which in turn is a framework of different authentication methods it calls “mechanisms”. Most XMPP services use the SCRAM family of mechanisms. In fact, it’s mandatory for modern XMPP software to support SCRAM.

These SCRAM mechanisms are quite clever: they allow both the server and the client to store a hash instead of the password, allow the server to verify the client knows the password, and allow the client to verify that the server knows the password (and isn’t just faking success - such as an attacker might try to do if they managed to compromise the server’s TLS certificate and wanted to intercept your traffic).

Yet, as far as we have come with password authentication, there are some real-world problems with passwords that we need to recognize. Passwords have proven, time and time again, to be a weak point in account security. From users choosing weak passwords, reusing them across multiple services, or accidentally exposing them to fake services (phishing attacks), there are multiple ways that unauthorized parties can gain access to password-based services.

Multi-factor authentication

To try and plug the holes in this leaky boat, many online services have adopted multi-factor authentication (“MFA”). This extra layer of security generally requires you to provide proof that you also possess an additional secret securely generated by the service. This is achieved using hardware tokens, mobile apps and often simply sending a numeric code via SMS. Using this extra step ensures accounts can still be protected even if passwords are guessed or obtained by attackers.

Most XMPP services and software do not currently support multi-factor authentication. If you’re a security-aware individual, that’s not a major problem in itself: you can achieve practically equivalent security by using a strong unique password and only using it to access your XMPP account. But as a service provider, you know that’s not going to be the case across all your users. As XMPP continues to gain adoption with non-technical users through new projects such as Snikket, we need to provide the safest environment we can for everyone.

Although we have had some hacky solutions available for multi-factor authentication with Prosody for a long time, there has been no standard approach implemented in clients and servers. The most recent and promising standard is XEP-0388: Extensible SASL Profile, which defines a way for the server to ask the client to perform more steps (such as prompting the user to provide a second factor) after authentication.

There are no known open-source implementations of XEP-0388 currently, but we plan to add support for it in Prosody as part of this project. Once this is in place, clients will be able to start introducing support for it too.

One of the challenges for multi-factor authentication in XMPP is that you don’t necessarily want to enter an authentication code every time your app connects to your account. With most people using XMPP on mobile networks these days, it’s common for your XMPP app to re-authenticate to the server multiple times per day due to network changes. You don’t really want to miss messages because the app was waiting for you to enter an auth code!

On websites, you generally provide a password once, when you initially log in. If successfully verified, the website then stores a cookie in your browser. This cookie is very similar to a temporary, unique and session-specific password, which is used to identify you from then on (which is why your account password isn’t required on every page request).

XMPP doesn’t have anything like cookies, so when a verified device reconnects, it will just use the password again. The server (if multi-factor is enforced) will inconveniently require the user to provide a second factor again too. There are some proposed solutions, such as the CLIENT-KEY SASL mechanism by Dave Cridland. TLS client certificates are also supported in XMPP, and would provide a solution to this issue too. Usage of CLIENT-KEY in XMPP is described in XEP-0399, and time-based MFA authentication codes (TOTP) in XEP-0400; however, neither is available in current XMPP clients and servers.

During this project, we plan to expand and implement these XEPs in Prosody, to make multi-factor authentication practical and user-friendly.


Authorization

Once Prosody has securely proven that you are the account owner, that’s often the end of the story - today. However, with the mechanisms that we just discussed that allow us to securely identify individual clients, we can start to do more interesting things. For example, Prosody will be able to show you exactly what clients are currently authorized on your account. If a device gets lost or stolen, it becomes possible to selectively revoke that device’s authorization.

As well as revoking access, we’ll be able to assign different permission levels for each of your sessions even if the device isn’t compromised. For example, maybe you want to connect from a client on an untrusted machine - but you don’t want it to have access to read past messages from your archive. That’s something we will be able to arrange.

Combining the ability to revoke sessions and the ability to specify per-session permissions leads us to another new possibility: granting others limited access to your account.

For example, Movim is a popular social web XMPP client. Anyone with an XMPP account can log in to a Movim instance, and use it to chat, follow news, and discover communities. One problem is that Movim needs to log in to your account, so it needs your credentials. That’s not so bad if you are self-hosting Movim, or you are using an instance managed by your XMPP provider. However, many people don’t have that option, and rely on third-party hosted Movim instances to sign in.

You might also want to connect other special-purpose clients to your account, for account backup and migration, bots, or apps that integrate with XMPP for synchronization and collaboration.

Using our new authorization capabilities, one of our big goals is to allow you to log in to such third-party apps and utilities without ever sharing your password with them. And when you are finished, you can easily revoke their access to your account without needing to reset and change your password across all your other clients.

Flexible permissions framework

Internally, we’ll support these new authorization possibilities through an overhaul of Prosody’s permission handling. In 0.12 and earlier, the only permission check supported in most of Prosody is: “is this user an admin?”. We are adding support for arbitrary roles, and allowing you to fully customize the permissions associated with each role. Users and even individual sessions can be assigned roles.

That means someone who generally has admin access to Prosody may choose not to grant that level of access to all their clients. Or they might choose to enable their admin powers only when they need them, spending most of their time as a normal user.

These changes alone will unlock many new possibilities for operators and developers. Expect the first pieces of this work to land in Prosody trunk nightly builds very soon, as it forms the basis of all the rest of the features discussed in this post!

Further updates about this project will be posted on this blog. The project homepage is over at

by The Prosody Team at June 20, 2022 09:15

June 19, 2022

Paul Schaub

Reproducible Builds – Telling of a Debugging Story

Reproducibility is an important tool to empower users. Why would a user care about that? Let me elaborate.

For a piece of software to be reproducible means that everyone with access to the software’s source code is able to build the binary form of it (e.g. the executable that gets distributed). What’s the big deal? Isn’t that true for any project with accessible source code? Not at all. Reproducibility means that the resulting binary EXACTLY matches what gets distributed. Each and every bit and byte of the binary is exactly the same, no matter on which machine the software gets built.

The benefit of this is that on top of being able to verify that the source code doesn’t contain any spyware or unwanted functionality, the user is now also able to verify that the distributable they got e.g. from an app store has no malicious code added into it. If, for example, the Google Play Store injected spyware into an application submitted by a developer, the resulting binary distributed via the store would differ from the binary built by the user.

Why is this important? Well, Google already requires developers to submit their signing keys, so they can modify software releases after the fact. Now, reproducibility becomes a tool to verify that Google did not tamper with the binaries.
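In practice, the verification boils down to a checksum comparison between the binary you built yourself and the one you downloaded. A minimal sketch (the file names are illustrative, and the two files here are stand-ins created for the demonstration):

```shell
# Compare a locally built artifact against a downloaded one.
# Stand-in files; in reality these would be your local build and
# the binary fetched from the app store or Maven Central.
printf 'binary contents' > locally-built.jar
printf 'binary contents' > downloaded.jar

local_sum=$(sha256sum locally-built.jar | cut -d' ' -f1)
remote_sum=$(sha256sum downloaded.jar | cut -d' ' -f1)

if [ "$local_sum" = "$remote_sum" ]; then
  echo "reproducible: checksums match"
else
  echo "MISMATCH: the distributed binary differs from the local build"
fi
```

If the build is reproducible, any mismatch here is a red flag that something modified the binary after it left the developer's machine.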

I try to make PGPainless build reproducible as well. A few months ago I added some lines to the build script which were supposed to make the project reproducible by using static file modification dates, as well as a deterministic file order in the JAR archive.

    // Reproducible Builds
    tasks.withType(AbstractArchiveTask) {
        preserveFileTimestamps = false
        reproducibleFileOrder = true
    }

It took a bit more tinkering back then to get it to work though, as I was using a Properties file written to disk during build time to access the library’s version during runtime, and it turns out that the default Writer for Properties files includes the current time and date in a comment line. This messed up reproducibility, as now that file would be different each time the project got built. I eventually managed to fix that though by writing the file myself using a custom Writer. When I tested my build script back then, both my laptop and my desktop PC were able to build the same exact JAR archive. I thought I was done with reproducibility.
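To see why settings like preserveFileTimestamps matter at all: archive formats store each entry's modification time, so identical content archived at two different times produces two different byte streams. A small demonstration with tar (the same effect applies to ZIP/JAR archives):

```shell
# Same file content, two different mtimes -> two different archives.
printf 'same content' > file.txt
tar -cf first.tar file.txt

touch -t 202001010000 file.txt   # change only the timestamp
tar -cf second.tar file.txt

sha256sum first.tar second.tar   # the checksums differ
cmp -s first.tar second.tar || echo "archives differ"
```

This is exactly the class of nondeterminism that the build-script snippet above eliminates by pinning timestamps and file order.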

Today I drafted another release for PGPainless. I noticed that my table of reproducible build hashes for each release was missing the checksums for some recent releases. I quickly checked out those releases, computed the checksums and updated the table. Then I randomly chose the 1.2.2 release and decided to check if the checksum published to maven central still matches my local checksum. And to my surprise it didn’t! Was this a malicious act from Maven Central?

Release 1.2.2 was created while I was on my journey through Europe, so I had used my laptop to draft the release. The first thing I did was grab the laptop, check out the release’s sources in git and build the binaries. Et voilà, I got checksums matching those on Maven Central. So it wasn’t an attack; for some reason my laptop was simply producing different binaries than my main machine.

I transferred the “faulty” binaries over to my main machine to compare them in more detail. First I tried Meld, which is a nice graphical diff tool for text files. It completely froze though, as apparently it is not so great for comparing binary files.

Next I decompressed both binaries and compared the resulting folders. Meld did not report any differences, the directories matched perfectly. What the heck?

Next I tried diff 1.jar 2.jar, which very helpfully displayed the message “Binary files 1.jar and 2.jar are different”. Thanks for nothing. After some more research, I found out that you can use the --text flag to make diff spit out more details. However, the output was not really helpful either, as the binary files produced lots of garbled output in the terminal.

I did some research and found that there were special diff tools for JAR files. Checking out one project called jardiff looked promising initially, but eventually it reported that the files were identical. Hm…

Then I opened both files in ghex to inspect their byte code in hexadecimal. By chance I spotted some differences near the end of the files.

The same spots in the other JAR file looked identical, except that the A4 was replaced by a B4. Strange. I managed to find a command which finds and displays all mismatching bytes in the two JAR files:

$ cmp -l 1.jar 2.jar | gawk '{printf "%08X %02X %02X\n", $1, strtonum(0$2), strtonum(0$3)}'
00057B80 ED FD
00057BB2 ED FD
00057BEF A4 B4
00057C3C ED FD
00057C83 A4 B4
00057CDE A4 B4
00057D3F A4 B4
00057DA1 A4 B4

Weird, in many places ED got changed to FD and A4 got changed into B4 in what looked like some sort of index near the end of the JAR file. At this point I was sure that my answers would unfortunately lie within the ZIP standard. Why ZIP? From what I understand, JAR files are mostly ZIP files. Change the file ending from .jar to .zip and any standard ZIP tool will be able to extract your JAR file. There are probably nuances, but if there are, they don’t matter for the sake of this post.
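The same byte-level comparison can also be scripted without gawk; a small Python sketch (the file names are just the ones used above):

```python
def diff_offsets(path_a, path_b):
    """Yield (offset, byte_a, byte_b) for every mismatching byte,
    similar to `cmp -l` (offsets here are 0-based, cmp's are 1-based)."""
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        data_a, data_b = fa.read(), fb.read()
    for offset, (x, y) in enumerate(zip(data_a, data_b)):
        if x != y:
            yield offset, x, y

# for off, x, y in diff_offsets("1.jar", "2.jar"):
#     print(f"{off:08X} {x:02X} {y:02X}")
```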

The first thing I did was to check which versions of zip were running on both of my machines. To my surprise they matched, and since I wasn’t even sure whether JAR files are generated using the standard zip tool, this was a dead end for me. Searching the internet some more eventually led me to a site describing the file structure of PKZIP files. I originally wasn’t sure if PKZIP was what I was looking for, but I had seen the characters PK when investigating the hex code before, so I gave the site a try.

Somewhere I read: “The signature of the local file header. This is always '\x50\x4b\x03\x04'.” Aha! I just had to search for the octets 50 4B 03 04 in my file! They should be in proximity to the bytes in question, so I just had to read backwards until I found them. Aaaand: 50 4B 01 02. Damn, this wasn’t it. But 01 02 looked so suspiciously non-random, maybe I had overlooked something? Let’s continue reading the website. Aha: “The central directory file header. This is always '\x50\x4b\x01\x02'.” The section even described its format in a nice table. Now I just had to manually count the octets to determine what exactly differed between the two JAR files.

It turns out that the octets 00 00 A4 81 I had observed to change in the file were labeled as “external file attributes; host-system dependent”. Again, not very self-explanatory but something I could eventually throw into a search engine.

Some post on StackOverflow suggested that this had to do with file permissions. Apparently ZIP files (and by extension also JAR files) would use the external attributes field to store read and write permissions of the files inside the archive. Now the question turned into: “How can I set those to static values?”.
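Python’s zipfile module exposes exactly this field, which makes the observation easy to verify: the upper 16 bits of external_attr hold the Unix file mode, and mode 0o100644 is 0x81A4 while 0o100664 is 0x81B4, which is precisely the A4 → B4 difference seen in the hex dump. A small sketch:

```python
import zipfile

def entry_modes(jar_path):
    """Map each archive member to the Unix permission bits stored in the
    central directory's 'external file attributes' field (upper 16 bits)."""
    with zipfile.ZipFile(jar_path) as zf:
        return {info.filename: (info.external_attr >> 16) & 0o7777
                for info in zf.infolist()}
```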

After another hour of researching the internet with permutations of the search terms jar, archive, file permissions, gradle, external attributes and zip, I finally stumbled across a bug report in another software project that described the exact same issue I had: differing JAR files on different machines. In their case, their CI would build the JAR file in a Docker container and set different file permissions than the locally built file, hence a differing JAR archive.

In the end I found a bug report on the gradle issue tracker, which exactly described my issue and even presented a solution: dirMode and fileMode could be used to statically set permissions for files and directories in the JAR archive. One of the comments in that issue reads:

We should update the documentation.

The bug report was from 3 years ago…

Yes, this would have spared me 3 hours of debugging 😉
But then I probably would not have gone on this little dive into the JAR/ZIP format, so in the end I’m not mad.

Finally, my build script now contains the following lines to ensure reproducibility, no matter the file permissions:

    // Reproducible Builds
    tasks.withType(AbstractArchiveTask) {
        preserveFileTimestamps = false
        reproducibleFileOrder = true

        dirMode = 0755
        fileMode = 0644
    }

Finally, my laptop and my desktop machine produce the same binaries again. Hopefully this post can help others fix their reproducibility issues.

Happy Hacking!

by vanitasvitae at June 19, 2022 22:08

June 18, 2022


Gajim 1.4.4

Gajim 1.4.4 comes with many improvements: emoji auto-complete, automatic theme switching when your desktop switches from light to dark in the evening, a completely reworked Gajim remote interface, and many bug fixes.

What’s New

After many emoji improvements in Gajim 1.4.3, this version comes with emoji auto-completion while writing messages! As soon as you start typing a :, a popover will show you available emoji shortcodes, just like on Slack or GitHub 🎉

Emoji auto-complete

This works for chat commands as well. Starting a message with / will show you available commands, like /me for example.

Gajim’s behavior when closing or minimizing the main window can now easily be configured, and it should work better on various systems.

Did you know you can integrate Gajim in custom scripts for automation? Gajim offers a DBus interface, which has been completely rewritten in this release.

Fixes and improvements

Several issues have been fixed in this release.

  • Preferences: Action on Close Button setting has been added
  • Preferences: Show in Taskbar setting has been added
  • Preferences: D-Bus Interface setting has been added
  • gajim-remote has been rewritten
  • History: Keep History ‘Until Gajim is Closed’ option has been added
  • Group chats: Gajim now asks for confirmation before leaving

Have a look at the changelog for the complete list.


As always, don’t hesitate to contact us at or open an issue on our Gitlab.

June 18, 2022 00:00

June 09, 2022

Prosodical Thoughts

Prosody 0.12.1 released

We are pleased to announce a new minor release from our stable branch.

While the 0.12.0 release has been a huge success, inevitably people found some aspects that didn’t work quite as intended, or weren’t as polished as they ought to be. We appreciate the help of everyone who reported issues to us, and we’re happy to now release our best version yet: 0.12.1 is here!

Notably, we made a couple of changes that improve compatibility with Jitsi Meet, we fixed some bugs in our newly-extended XEP-0227 support, invites, and DNS handling. We also improved compatibility with some less common platforms.

A summary of changes in this release:

Fixes and improvements

  • mod_http (and dependent modules): Make CORS opt-in by default (#1731)
  • mod_http: Reintroduce support for disabling or limiting CORS (#1730)
  • net.unbound: Disable use of hosts file by default (fixes #1737)
  • MUC: Allow kicking users with the same affiliation as the kicker (fixes #1724 and improves Jitsi Meet compatibility)
  • mod_tombstones: Add caching to improve performance on busy servers (fixes #1728: mod_tombstone: inefficient I/O with internal storage)

Minor changes

  • prosodyctl check config: Report paths of loaded configuration files (#1729)
  • prosodyctl about: Report version of lua-readline
  • prosodyctl: check config: Skip bare JID components in orphan check
  • prosodyctl: check turn: Fail with error if our own address is supplied for the ping test
  • prosodyctl: check turn: warn about external port mismatches behind NAT
  • mod_turn_external: Update status and friendlier handling of missing secret option (#1727)
  • prosodyctl: Pass server when listing (outdated) plugins (fix #1738: prosodyctl list --outdated does not handle multiple versions of a module)
  • util.prosodyctl: check turn: ensure a result is always returned from a check (thanks eTaurus)
  • util.prosodyctl: check turn: Report lack of TURN services as a problem #1749
  • util.random: Ensure that native random number generator works before using it, falling back to /dev/urandom (#1734)
  • mod_storage_xep0227: Fix mapping of nodes without explicit configuration
  • mod_admin_shell: Fix error in ‘module:info()’ when statistics is not enabled (#1754)
  • mod_admin_socket: Compat for luasocket prior to unix datagram support
  • mod_admin_socket: Improve error reporting when socket can’t be created (#1719)
  • mod_cron: Record last time a task runs to ensure correct intervals (#1751)
  • core.moduleapi, core.modulemanager: Fix internal flag affecting logging in in some global modules, like mod_http (#1736, #1748)
  • core.certmanager: Expand debug messages about cert lookups in index
  • configmanager: Clearer errors when providing unexpected values after VirtualHost (#1735)
  • mod_storage_xep0227: Support basic listing of PEP nodes in absence of pubsub#admin data
  • mod_storage_xep0227: Handle missing {pubsub#owner}pubsub element (fixes #1740: mod_storage_xep0227 tracebacks reading non-existent PEP store)
  • mod_storage_xep0227: Fix conversion of SCRAM into internal format (#1741)
  • mod_external_services: Move error message to correct place (fix #1725: mod_external_services: Misplaced textual error message)
  • mod_smacks: Fix handling of unhandled stanzas on disconnect (#1759)
  • mod_smacks: Fix counting of handled stanzas
  • mod_smacks: Fix bounce of stanzas directed to full JID on unclean disconnect
  • mod_pubsub: Don’t attempt to use server actor as publisher (#1723)
  • mod_s2s: Improve robustness of outgoing s2s certificate verification
  • mod_invites_adhoc: Fall back to generic allow_user_invites for role-less users
  • mod_invites_register: Push invitee contact entry to inviter
  • util.startup: Show error for unrecognized command-line arguments passed to ‘prosody’ (#1722)
  • util.jsonpointer: Add tests, compat improvements and minor fixes
  • util.jsonschema: Lua version compat improvements


As usual, download instructions for many platforms can be found on our download page

If you have any questions, comments or other issues with this release, let us know!

by The Prosody Team at June 09, 2022 12:27

June 08, 2022

Erlang Solutions

MongooseIM 5.1 Configuration Rework

MongooseIM is a modern messaging server that is designed for scalability and high performance. The use of XMPP (Extensible Messaging and Presence Protocol) extensions (XEPs) means it is also highly customisable. Since version 4.0 it has been using the TOML configuration file format, which is much more user-friendly than the previously used Erlang terms. The latest release, MongooseIM 5.1, makes it more developer-friendly as well by reworking how the configuration is processed and stored, hence making it easier to understand and extend. Other significant changes in this release include improvements of the Inbox feature, support for the latest Erlang/OTP 25 and numerous internal improvements, like getting rid of the dynamically compiled modules. For all changes, see the Release Notes.

Until version 3.7, MongooseIM was configured with a file called mongooseim.cfg, which contained Erlang terms. This legacy format was the primary obstacle encountered by new users. The configuration terms were interpreted and converted to the internal format. This process was difficult because there was no configuration schema – all conversion logic was scattered around the system. Some of the conversion and validation was done when parsing the configuration terms, and some was done afterwards, sometimes deep inside the logic handling specific use cases. This meant that many errors were only reported when the corresponding features were used rather than when the system started, leading to unwanted runtime errors. As a result, the whole configuration subsystem needed to be rewritten. The new configuration file is called mongooseim.toml and uses the TOML format, which is simple, intuitive and easy to learn.

Because of the significant size of the code base and the tight coupling of system logic with configuration processing, it was very difficult to write a new configuration subsystem from scratch in one release cycle. What is more, simultaneous support for both formats was needed during the transition period. This led us to divide the work into three main steps:

  1. Addition of new config parsing, validating and processing logic, which used the old internal format as the target (version 4.0).
  2. Dropping the old Erlang configuration format (version 4.1).
  3. Reworking the internal format to be consistent with the TOML configuration file (version 5.1).

Each of the steps was further divided into multiple small changesets, keeping all CI tests passing the entire time. We also periodically load-tested the code to monitor the performance.

What did we do?

Here is one of the most complicated sections from the default mongooseim.cfg file taken from the old MongooseIM 3.7.1:

{8089, ejabberd_cowboy, [
  {num_acceptors, 10},
  {transport_options, [{max_connections, 1024}]},
  {protocol_options, [{compress, true}]},
  {ssl, [{certfile, "priv/ssl/fake_cert.pem"}, {keyfile, "priv/ssl/fake_key.pem"}, {password, ""}]},
  {modules, [
    {"_", "/api/sse", lasse_handler, [mongoose_client_api_sse]},
    {"_", "/api/messages/[:with]", mongoose_client_api_messages, []},
    {"_", "/api/contacts/[:jid]", mongoose_client_api_contacts, []},
    {"_", "/api/rooms/[:id]", mongoose_client_api_rooms, []},
    {"_", "/api/rooms/[:id]/config", mongoose_client_api_rooms_config, []},
    {"_", "/api/rooms/:id/users/[:user]", mongoose_client_api_rooms_users, []},
    {"_", "/api/rooms/[:id]/messages", mongoose_client_api_rooms_messages, []},
    %% Swagger
    {"_", "/api-docs", cowboy_swagger_redirect_handler, {priv_file, cowboy_swagger, "swagger/index.html"}},
    {"_", "/api-docs/swagger.json", cowboy_swagger_json_handler, #{}},
    {"_", "/api-docs/[...]", cowboy_static, {priv_dir, cowboy_swagger, "swagger", [{mimetypes, cow_mimetypes, all}]}}
  ]}
]}

This section used to define one of the listeners that accept incoming connections – this one accepted HTTP connections, because it used the ejabberd_cowboy module. It had multiple handlers defined in the {modules, [...]} tuple. Together they formed the client-facing REST API, which could be used to send messages without the need for XMPP connections. The whole part was difficult to understand, and most of the terms should not be modified, because the provided values were the only ones that worked correctly. To customise it, one would need to know the internal details of the implementation. However, the logic was scattered around multiple Erlang modules, making it difficult to figure out what the resulting internal structures were.

For version 5.0, we used TOML, but the configuration was still quite complex because it reflected the complicated internals of the resulting Erlang terms – actually this was one of the very few parts of the file needing further rework. By cleaning up the internal format in 5.1, we have managed to make the default configuration simple and intuitive:

[[listen.http]]
  port = 8089
  transport.num_acceptors = 10
  transport.max_connections = 1024
  protocol.compress = true
  tls.verify_mode = "none"
  tls.certfile = "priv/ssl/fake_cert.pem"
  tls.keyfile = "priv/ssl/fake_key.pem"
  tls.password = ""

  [[listen.http.handlers.mongoose_client_api]]
    host = "_"
    path = "/api"

By just looking at the top line, one can see that the section is about listening for HTTP connections. The [[...]] TOML syntax denotes an element in an array of tables, which means that there could be other HTTP listeners. Below are the listener options. The first one, port, is just an integer. The remaining options are grouped into subsections (TOML tables). One could use the typical section syntax there, e.g.

  [listen.http.transport]
    num_acceptors = 10
    max_connections = 1024

Alternatively, an inline table could be used:

transport = {num_acceptors = 10, max_connections = 1024}

However, for such simple subsections the dotted keys used in the example seem to be the best choice. The TLS options make it obvious that fake certificates are used, and they should be replaced with real ones. The verify_mode option is set to none, which means that client certificates are not checked.

The biggest improvement in this configuration section was made in the API definition itself – now it is clear that [[listen.http.handlers.mongoose_client_api]] defines a handler for the client-facing API. The double brackets are a reminder that there can be multiple handlers if, for example, you would like to host another API at the same port. The question is: what happened to all the specific handlers that were present in the old configuration format? The answer is simple: they are all enabled by default, and you can control which ones are enabled if you want. The options configurable by the user are limited to the ones that actually work, reducing unnecessary frustration and flattening the learning curve. See the documentation for more details about this section. By the way, the docs have improved a lot as well.

How did we do it?

One of the challenges of supporting the old internal format was that the new TOML configuration format should be defined once and from then on changed as little as possible, to avoid forcing users to adjust their configuration with each software version. This was achieved with a customisable config processor that takes a configuration specification (config spec for short) and the parsed TOML tree (the tomerl parser was used here) as arguments. Each node is processed recursively with the corresponding config spec node and the current path (a list of keys from the root to the leaf). The config spec consists of three types of config nodes, specified with Erlang records corresponding to the TOML types:

  • Table – parsed into an Erlang map – #section{items = Items}: a configuration section, which is a key-value map; for each key a config spec record is specified.
  • Array – parsed into an Erlang list – #list{items = ConfigSpec}: a list of values sharing the same config spec.
  • String / Integer / Float / Boolean – parsed into a primitive Erlang value – #option{type = Type}, where Type is one of string, binary, integer, int_or_infinity, float or boolean (among others): a value of a primitive type; the type specifies how the parsed node is converted to the target type.

The root node is a section. TOML processing for each config node is done in up to 5 steps (depending on the node type). Each step is configured by the fields of the corresponding node specification record, allowing you to customise how the value is processed.

  • Parse (sections; items, defaults): check required keys, validate keys, recursively process each item, and merge the resulting values with defaults. The result is a key-value list unless a custom processing function was used for the values.
  • Parse (lists; items): recursively process each item. The result is a list.
  • Parse (options; type): check the type and convert the value to the specified type.
  • Validate (all node types; validate): check the value with the specified validator.
  • Format items (sections and lists; format_items): format the contents as a list (default for lists) or a map (default for sections).
  • Process (all node types; process): apply a custom processing function to the value.
  • Wrap (all node types; wrap): wrap the node in a top-level config item or a key-value pair, or inject the contents (as a list) into the items of the parent node.

The flexibility of these options enabled processing of any TOML structure into arbitrary Erlang terms that was needed to support the legacy internal configuration format. The complete config spec contains 1,245 options in total, grouped into 353 sections and 117 lists, many of which are deeply nested. The specification is mostly declarative, but the fact that it is constructed with Erlang code makes it possible to reuse common sections to avoid code duplication, and to delegate specification of the customisable parts of the system, e.g. the extension modules to the modules themselves, using Erlang behaviours. This way, if you fork MongooseIM and add your own extensions, you can extend the configuration by implementing a config_spec function in the extension module.
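The mechanism can be illustrated with a toy walker over such a declarative spec (the names and structure below are illustrative Python, not MongooseIM’s actual Erlang records):

```python
def process(spec, node):
    """Recursively process a parsed TOML node against its config spec,
    mirroring the parse -> validate -> process steps described above."""
    kind = spec["kind"]
    if kind == "section":
        value = {key: process(item_spec, node[key])
                 for key, item_spec in spec["items"].items() if key in node}
    elif kind == "list":
        value = [process(spec["items"], item) for item in node]
    else:  # option: convert the raw value to the target type
        value = spec["type"](node)
    if "validate" in spec and not spec["validate"](value):
        raise ValueError(f"invalid value: {value!r}")
    if "process" in spec:  # optional custom processing hook
        value = spec["process"](value)
    return value

loglevel = {"kind": "option", "type": str,
            "validate": lambda v: v in {"debug", "info", "warning", "error"}}
general = {"kind": "section", "items": {"loglevel": loglevel}}
```

With this spec, process(general, {"loglevel": "warning"}) yields {"loglevel": "warning"}, while an unknown level raises a validation error at startup rather than at runtime, which is the whole point of the rework.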

In the recent MongooseIM versions (4.0-5.0), the resulting top-level config options were stored in three different Mnesia tables for legacy reasons. This was unnecessarily complicated, and the first step towards version 5.1 was to put each top-level configuration node into a persistent term. Next, the internal format of each option was reworked to resemble the TOML tree as much as possible. Some options, like the extension modules, needed a more significant rework as they used custom ETS tables for storing the processed configuration. The effort resulted in a significant reduction of technical debt, and the code base was also reduced by a few thousand lines increasing readability and maintainability without any loss in overall performance. The table below summarises the whole configuration rework that has been undertaken over the last few releases.

  • Version 3.7 – Configuration file format: Erlang terms. Conversion logic: scattered around the code. Internal options: arbitrary Erlang terms stored in Mnesia and ETS, plus additional custom ETS tables.
  • Versions 4.0 – 5.0 – Configuration file format: TOML (nested tables and arrays with options). Conversion logic: organised, but complex. Internal options: arbitrary Erlang terms stored in Mnesia and ETS, plus additional custom ETS tables.
  • Version 5.1 – Configuration file format: TOML (nested tables and arrays with options). Conversion logic: organised, simple, minimal. Internal options: persistent terms with nested maps and lists; no custom ETS tables.

Config processing in action

As an example, let’s see how the first option in the general section, loglevel, is defined and processed:

general() ->
  #section{items = #{<<"loglevel">> => #option{type = atom,
                                               validate = loglevel,
                                               wrap = global_config},
                     %% ... further general options ...
                    }}.

The TOML configuration file contains this option at the top:

  loglevel = "warning"

The type atom means that it will be converted to an Erlang atom warning. It is then validated as loglevel – the validators are defined in the mongoose_config_validator module. The format_items step does not apply to options, and the process step is skipped as well, because there is no custom processing function defined. The option is then wrapped as global_config and will be put in a separate persistent term (it is not a nested option). It can then be accessed anywhere in the code with a single call, e.g. mongoose_config:get_opt(loglevel).


There are many more features, e.g. nested options can be accessed by their paths, and there are host-type-specific options as well, but this simple example shows the general idea.

Plans for the future

Going back to the topic of HTTP API, you can still find multiple REST API definitions in the default configuration file, which can be a bit confusing. This is because of the long evolution of the APIs and the backwards compatibility. However, for the upcoming release we are redesigning the API and command line interface (CLI), adding brand new GraphQL support, and making the APIs more complete and organised. Stay tuned, because we are already working on a new release!

If you would like to talk to us about how your project can benefit from using MongooseIM, you can contact us at and one of our expert team will get right back to you.

The post MongooseIM 5.1 Configuration Rework appeared first on Erlang Solutions.

by Pawel Chrzaszcz at June 08, 2022 10:30

June 07, 2022

Erlang Solutions

FinTech Matters newsletter | June 2022

Subscribe to receive FinTech Matters and other great content, notifications of events and more to your inbox. We will only send you relevant, high-quality content, and you can unsubscribe at any time.

Read on to discover what really matters for tech in financial services right now.

Despite some much-needed recalibration in tech markets, there are still encouraging innovations happening in fintech, around payments and other areas, as we head into H2 2022.

Michael Jaiyeola, Fintech Marketing Lead

[Subscribe now]

The Top Stories Right Now

Google brings virtual cards using autofill to Chrome and Android

Google launched its digital wallet and virtual cards at their recent I/O developer conference.

From this summer, shoppers with an eligible Visa, American Express, Mastercard or Capital One card can use Autofill on Chrome and Android with a temporary virtual number instead of the actual card numbers.

According to Google, this provides a more secure and seamless experience as you will no longer need to enter long card numbers and hunt for your CVV every time you want to check out online.

Find out more

Apple faces an antitrust charge over NFC payments

The European Commission has charged Apple with restricting access to the iPhone’s NFC chip technology for payments. The EC has looked into other payment services being unable to directly use the NFC chip for an app that might rival Apple Pay. While NFC is available in most payment terminals, only Apple Pay can use the chip for wireless ‘tap and go’ payments.

The EC has sent Apple a ‘statement of objections’ informing the tech giant of a preliminary view that it abused its dominant position in markets for mobile wallets on iOS devices.

Read the full story

EU launches Digital Finance Platform

The European Commission has launched the EU Digital Finance Platform, a website designed to build dialogue between fintech players and supervisors.

The platform is live after being first announced as part of the EC’s digital finance strategy in 2020. The objective is to address the fragmentation of digital financial services across Europe’s Single Market and to enable innovation and growth.

At launch, there are two main parts to the platform. The first is an observatory offering interactive features such as a Fintech Map, events and a section where users will be able to share relevant research material.

The second section is a gateway which will act as a single access point to supervisors, with information about national innovation hubs, regulatory sandboxes and licensing requirements.

Read the full story

More content from us

MongooseIM is our real-time, mobile messaging platform for one-one messaging, group chat and social features. Discover how we helped NayaPay (the chat-led mobile payments platform) add MongooseIM as a secure messaging backend.

Read the case study

There’s a lot going on in payments right now from growth in real-time, embedded and integration of cryptocurrency. We cover this and more in our two-part blog series.

Read the blog post

SumUp is another successful fintech company built using Erlang technology. We spoke with their Elixir engineering manager Daniel Pilon, who is working on building SumUp Bank, about software engineering principles in the financial services domain.

Read his article

ElixirConf EU 2022 – coming soon as a hybrid event 9-10 June in London and online. The Elixir programming language is used to build the systems at the likes of Solaris Bank, TaxJar (a Stripe company) and Memo Bank.

Find out more

Erlang Solutions bitesize

We’re hiring! Looking for your next developer gig? You should first check out our open positions here. Also, here’s a handy list of the top 25 CTOs in fintech that is worth a look too!

RabbitMQ Summit – is happening on 16 September as a hybrid event: in person at CodeNode in London and online. RabbitMQ is a technology found in nearly all fintech stacks, and this conference is the go-to place to hear from the experts who use it. Find out more.

To make sure you don’t miss out on any of our leading FinTech content, events and news, do subscribe for regular updates. We will only send you relevant high-quality content and you can unsubscribe at any time.

Connect with me on LinkedIn


The post FinTech Matters newsletter | June 2022 appeared first on Erlang Solutions.

by Michael Jaiyeola at June 07, 2022 15:48

Modern Software Engineering Principles for Fintechs by Daniel Pilon at SumUp

Daniel Pilon is a Software Engineering Manager at SumUp. Since 2007 he has worked across several industries before arriving in the fintech space. He has experience in many programming languages, such as C#, Java and JavaScript but since discovering Elixir and the power of functional programming a few years ago, he hasn’t looked back.

Right now he is building SumUp Bank, a complete digital banking solution to empower small merchants and simplify their financial needs.

We spoke to Daniel about his experiences using Elixir in the financial services domain, here’s what he had to say.

Why are system resilience and scalability so important in financial services?

Financial services handle, in different ways, other people’s or companies’ money. It’s vital for the success of a fintech company that those clients can trust that their money will be handled without any unpleasant surprises and that they can use the services any time they need. For fintechs to be able to give this level of trust to their clients, they must start by first trusting their operation. 

Given that fintechs don’t have the same level of resources and size as banks and incumbent Financial Services Institutions (FSIs) that have been established for decades, any problem that arises could be critical and difficult to diagnose and recover from, should it need manual intervention. For these reasons, it’s crucial to the business that the services supporting the operation are up and available the vast majority of the time.

What are the commercial risks of system downtime to fintechs?

There are many reputational and business risks that may arise from downtime or intermittence in availability. For example, if you’re a fintech that provides payment services, it might cause the client to be unable to pay for their shopping cart in a supermarket or online. When this occurs, clients lose confidence and will either stop using your services or even seek legal ways to repair their damage. This leads to reputational damage to your brand and metrics such as your Net Promoter Score (NPS).

There are several practices that are required in order to keep services up and to anticipate problems, such as building robust observability, having a comprehensive automated test suite, keeping track of MTTD (mean time to detect), MTTR (mean time to recover) and MTBF (mean time between failures) and always iterating over those metrics in order to learn what they say about the opportunities to enhance the company infrastructure.

Designing for system scalability with Elixir

When it comes to handling demand and being able to scale, it’s crucial that the company can statistically determine business expectations, so that the engineering teams can decide the best approach for development. Without numbers, it’s possible that engineers will either be too optimistic about the load the application supports, or will try to optimise early for a load that shouldn’t be expected in the short or mid term, delaying deliveries and losing time to market. With the right information, engineers are empowered to decide what is important for the current status of the company and what can be done later, as well as having stress and load tests that can show how the system behaves against the expectations.

What I like most about using Elixir/Erlang is that it’s often easy to design applications that are resilient and always evolving. Writing idiomatic code and understanding how workers, supervisors, and the supervision tree fit together helps you design systems in which eventual crashes are handled properly and without side effects. It is also easy to modularise different domains so that, should expectations change and more load arrive, it’s simple to increase the processing power or even split a domain into a separate application that can scale independently.
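To make the supervision-tree idea concrete, here is a minimal sketch (module names are illustrative, not SumUp’s code): a `:one_for_one` supervisor restarts a crashed worker without disturbing its siblings.

```elixir
defmodule Payments.Worker do
  use GenServer

  # Start a named worker; the atom doubles as the registered process name.
  def start_link(name), do: GenServer.start_link(__MODULE__, name, name: name)

  @impl true
  def init(name), do: {:ok, name}
end

defmodule Payments.Supervisor do
  use Supervisor

  def start_link(_), do: Supervisor.start_link(__MODULE__, :ok, name: __MODULE__)

  @impl true
  def init(:ok) do
    children = [
      # :one_for_one - a crashing worker is restarted on its own,
      # without touching its siblings
      %{id: :card_worker, start: {Payments.Worker, :start_link, [:card_worker]}},
      %{id: :pix_worker, start: {Payments.Worker, :start_link, [:pix_worker]}}
    ]

    Supervisor.init(children, strategy: :one_for_one)
  end
end
```

If the card worker crashes mid-transaction, the supervisor brings a fresh one back while the pix worker keeps serving requests, which is the "handled properly and without side effects" behaviour described above.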

Why did SumUp use the Erlang ecosystem?

The Erlang ecosystem has been empowering SumUp since day one. We started as a payment solutions provider for small merchants and the core services that enabled us to scale our transaction processing power to the millions were mostly written in Erlang. 

Since SumUp’s mission is to enable small businesses to have access to digital payment solutions and most of our clients are transacting small amounts in their transactions, we needed confidence that we could take those transactions. They are vital to our client’s businesses, so they are vital for SumUp as well. 

Erlang and the BEAM VM gave us the highest level of confidence in this aspect, given that it is battle-tested for decades, it’s built to scale readily, it allows us to make concurrent processing easier and it has fault tolerance as a first-class citizen. But, most importantly, we need platforms where I/O intensive operations are handled well by design, so we can confidently process as many parallel payments as we need. The BEAM VM checks the boxes for all of the aforementioned aspects.

How they used Elixir to build a fintech backend for SumUp Bank

Over time SumUp grew exponentially and now we are in a position of increasing our offer of financial products to clients, adding services such as digital banking and credit/lending. The building of a digital bank in Brazil started in 2019 when it was decided that the core language for our backend services would be Elixir. 

With Elixir, we continue to have everything that makes Erlang so powerful, but with a modernised syntax and mature tooling for production usage. At SumUp Bank we currently run Elixir in a microservices architecture, running on AWS with Kubernetes via EKS. Our services are backed by Postgres databases when needed, with data exposure via RESTful APIs. For asynchronous communication, we use AWS SNS/SQS. Our clients all connect to BFFs (Backends for frontends) built using GraphQL. This is all made possible by robust libraries that are available as open-source for Elixir applications, such as Phoenix, Ecto, Absinthe, ExAws and Broadway. 

Over these last two years of running a bank in production, we have learnt that Elixir/Erlang was the best choice we could have made. We were able to start small, quickly get into production and scale as needed. For example, it was not until we reached some scale that we started to prioritise asynchronous communication between services, and moving to that type of architecture was simple and straightforward with the help of a library like Dashbit’s Broadway. Because Elixir/Erlang applications are lightweight and fault-tolerant, we also have a lot of flexibility to use computational resources wisely, lowering our infrastructure costs while still providing outstanding throughput.

The future of open-source in fintech and the Elixir community 

Open-source has dramatically changed fintech and many other industries for the better. Personally, I can’t imagine us living in such a fast-changing and diverse world of options if not for the popularity and adoption of open-source. The speed at which companies are now able to deliver new technologies without the burden of having to reinvent the wheel or pay a huge amount for proprietary software is one of the foundations for such a transformation.

In the Elixir community, contribution to open-source is just an organic aspect of how things work. When I started my journey, the community was something that really caught my attention. The energy and the willingness to help each other were far beyond what I had experienced in my days with .NET or Java. Today, those technologies also have strong communities contributing to open-source, but it’s something most people had to learn as it happened.

With Elixir, most of the general-purpose libraries are open-source and very mature. My experience writing code for financial services using Phoenix, Broadway, Ecto and others has been a joy, and every time I sought help with appropriate usage or with issues, I found it. The power and synergy of the Elixir community are impressive: there is always somewhere one can reach out to seek help or contribute.

Of course, there are some areas where I think we still need to improve. For instance, some of the libraries that make the ecosystem stronger and that could ease adoption of the language are not always actively maintained. ExAws is a great library for interacting with AWS services, yet it was on the verge of being abandoned around 2020, when the original maintainer left the project. Another maintainer has since stepped in and assumed responsibility for it, so it’s receiving contributions and releases again.

Also, financial systems commonly have to integrate with third parties that don’t always expose straightforward RESTful APIs or other commonly used integration technologies. Integrations based on positional files or SOAP are unfortunately still around. More established ecosystems, such as Java, have plenty of options for these scenarios, while Elixir, being more recent, doesn’t always. In my experience, though, this is a minor issue weighed against all the positives the language brings, and it hasn’t been a massive burden when this sort of requirement has arisen.

It would be great if more companies using Elixir were willing to give back to the open-source community, either by opening up their internal libraries or by giving their developers incentives to contribute when certain cases are not covered by existing libraries. Sponsoring projects that are open to it would also be a great way to let maintainers focus on making their libraries better and better.

About SumUp

SumUp is a fintech company founded in 2012. Primarily a leader as a payment services provider, it has grown to be a multiproduct company, now offering other financial services, such as digital banking and lending.

SumUp tech is all built as cloud-native applications, running mostly in Erlang/Elixir, Ruby and Go.

A big thank you to Daniel for sharing his thoughts on and experiences of Elixir, the Erlang ecosystem and innovation in fintech. You can connect with him via his LinkedIn profile.

If you would like to speak about collaboration around telling the stories of using Elixir or Erlang in fintech, please reach out to Michael Jaiyeola (Erlang Solutions’ Fintech Marketing Lead) by email or on LinkedIn.

The post Modern Software Engineering Principles for Fintechs by Daniel Pilon at SumUp appeared first on Erlang Solutions.

by Daniel Pilon at June 07, 2022 15:46

June 05, 2022

The XMPP Standards Foundation

The XMPP Newsletter May 2022

Welcome to the XMPP Newsletter, great to have you here again! This issue covers the month of May 2022.

Like this newsletter, many projects and their efforts in the XMPP community are a result of people’s voluntary work. If you are happy with the services and software you may be using, especially throughout the current situation, please consider saying thanks or help these projects! Interested in supporting the Newsletter team? Read more at the bottom.

Newsletter translations

This is a community effort, and we would like to thank translators for their contributions. Volunteers are welcome! Translations of the XMPP Newsletter will be released here (with some delay):

XSF Announcements

XSF and Google Summer of Code 2022

XSF fiscal hosting projects

The XSF offers fiscal hosting for XMPP projects. Please apply via Open Collective. For more information, see the announcement blog post. Current projects:



A Bifrost bridge fork has implemented offline message support for Matrix rooms accessed via the bridge. XMPP users connecting to Matrix rooms via the Bifrost bridge did not get offline messages, because group chat (MUC) history support was not implemented in the bridge’s XMPP server implementation (based on xmpp-js). Thanks to great work done by Maranda, we can now have MUC history and also message archive (MAM) support for Matrix-to-XMPP bridged rooms. See this post for complete instructions on using this feature.

The JMP Newsletter announces a new release of the Cheogram Android client, SMS-only phone number ports, deeper integration with Snikket, and a new project for social instance hosting.

Software news

Clients and applications

Gajim 1.4.0, 1.4.1, 1.4.2, and 1.4.3 have been released! After more than a year of development, it’s finally time to announce the release of Gajim 1.4! The Gajim 1.4 series comes with a completely redesigned message window and conversation management. Workspaces allow you to organize your chats and keep matters separate where needed. These changes were only possible by touching a lot of Gajim’s code base, and we appreciate all the feedback we got from you.

Gajim’s new user interface

Psi+ portable 11.5.1627 (2022-05-21) and Psi+ installer 11.5.1629 (2022-05-31) have been released.

Go-sendxmpp 0.5.0 with Ox (OpenPGP for XMPP) improvements has been released, followed by a bugfix release 0.5.1.

The Cheogram project has released a small tool, hosted on its infrastructure, to easily compute an equivalent Matrix ID for your Jabber ID via known bridges.

Cheogram ID conversion


Servers

ejabberd 22.05 has been released. This version includes five months of work and some 200 commits, with many improvements (MQTT, MUC, PubSub, …) and bug fixes. Among the additions is the new mod_conversejs module for serving Converse.js; its option declarations below show ejabberd’s econf validation API in action:

mod_opt_type(bosh_service_url) ->
    econf:either(auto, econf:binary());
mod_opt_type(websocket_url) ->
    econf:either(auto, econf:binary());
mod_opt_type(conversejs_resources) ->
    econf:either(undefined, econf:directory());
mod_opt_type(conversejs_options) ->
    econf:map(econf:binary(), econf:either(econf:binary(), econf:int()));
mod_opt_type(conversejs_script) ->
    econf:either(auto, econf:binary());
mod_opt_type(conversejs_css) ->
    econf:either(auto, econf:binary());
mod_opt_type(default_domain) ->
    econf:binary().

mod_options(_) ->
    [{bosh_service_url, auto},
     {websocket_url, auto},
     {default_domain, <<"@HOST@">>},
     {conversejs_resources, undefined},
     {conversejs_options, []},
     {conversejs_script, auto},
     {conversejs_css, auto}].

Jackal 0.60.0 has been released.


Libraries

python-nbxmpp versions 3.0.0 to 3.1.0 have been released, bringing support for Message Moderation, Bookmarks extensions, and many bug fixes.

Extensions and specifications

Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).

By the way, xmpp.org features a new page about XMPP RFCs.


Proposed

The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

  • No XEPs proposed this month.


New

  • Version 0.1.0 of XEP-0465 (Pubsub Public Subscriptions)
    • Accepted by vote of Council on 2022-04-13. (XEP Editor (jsc))
  • Version 0.1.0 of XEP-0466 (Ephemeral Messages)
    • Accepted by vote of Council on 2022-05-03. (XEP Editor (jsc))


Deferred

If an Experimental XEP is not updated for more than twelve months, it is moved from Experimental to Deferred. A new update moves the XEP back to Experimental.

  • No XEPs deferred this month.


Updated

  • Version 0.3 of XEP-0365 (Server to Server communication over STANAG 5066 ARQ)
    • Make use of SLEP Streaming service, which was not available for 0.1. This provides a better service mapping than direct use of 5066 and provides compression. (sek)

Last Call

Last Calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call helps improve the XEP before returning it to the Council for advancement to Stable.

  • No Last Call this month.


Stable

  • No XEPs advanced to Stable this month.


Deprecated

  • No XEP deprecated this month.

Call for Experience

A Call for Experience, like a Last Call, is an explicit call for comments, but in this case it is mostly directed at people who have implemented, and ideally deployed, the specification. The Council then votes to move it to Final.

  • No Call for Experience this month.

Spread the news!

Please share the news on other networks:

Subscribe to the monthly XMPP newsletter

Also check out our RSS Feed!

Looking for job offers or want to hire a professional consultant for your XMPP project? Visit our XMPP job board.

Help us to build the newsletter

This XMPP Newsletter is produced collaboratively by the XMPP community. Therefore, we would like to thank Adrien Bourmault (neox), anubis, Anoxinon e.V., Benoît Sibaud, daimonduff, emus, Holger, Ludovic Bocquet, Licaon_Kter, Martin, mathieui, MattJ, nicfab, Pirate Praveen, Ppjet6, Sam Whited, singpolyma, TheCoffeMaker, wurstsalat, Ysabeau, Zash for their support and help in creation, review, translation and deployment. Many thanks to all contributors and their continuous support!

Each month’s newsletter issue is drafted in this simple pad. At the end of each month, the pad’s content is merged into the XSF Github repository. We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. You have a project and want to spread the news? Please consider sharing your news or events here, and promote it to a large audience.

Tasks we do on a regular basis:

  • gathering news in the XMPP universe
  • short summaries of news and events
  • summary of the monthly communication on extensions (XEPs)
  • review of the newsletter draft
  • preparation of media images
  • translations


This newsletter is published under CC BY-SA license.

June 05, 2022 00:00

June 01, 2022


Gajim 1.4.3

Gajim 1.4.3 comes with some exciting news: Native emoji rendering on Windows! Want to customize your workspaces? Why not use emojis as well? As always, lots of bugs have been fixed in this release.

What’s New

This release is all about emojis. Gajim is based on GTK, a multi-platform framework for graphical user interfaces (GUIs). For rendering text, GTK relies on Pango and, underneath, on Cairo. On Windows, Cairo wasn’t able to render colored emojis, until now! The latest Cairo release enables Gajim to render emojis in their full spectrum of colors on Windows 🎉

As a consequence, we can use GTK’s native emoji chooser on Windows, and we no longer have to rely on workarounds to display emojis in chat messages. Without these workarounds, Gajim’s performance on Windows increased significantly.

While figuring out how to enable all this on Windows, we added a nice little feature as well: you can now add emojis to workspaces!

Workspaces with emojis


Windows users please note: Windows builds are now based on Python 3.9, which does not run on Windows 7 or older.

Fixes and improvements

Several issues have been fixed in this release.

  • AppPage: Now shows plugin update notifications
  • ChatList: Middle mouse click for closing a chat has been added
  • DirectorySearch: A ‘Start Chat’ menu item has been added
  • Group chat roster: Visibility is now stored
  • Jingle file transfer widget is now smaller
  • Widths of Contact Info and Group Chat Details elements have been unified
  • Workspaces: Move to new workspace functionality has been added
  • GStreamer is now an optional dependency again
  • Windows installer has been simplified

Have a look at the changelog for the complete list.


As always, don’t hesitate to contact us or open an issue on our GitLab.

June 01, 2022 00:00

May 31, 2022


Newsletter: Togethr, SMS-only Ports, Snikket Hosting

Hi everyone!

Welcome to the latest edition of your pseudo-monthly JMP update!

In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client.  Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

This month our team launched a new product to help people looking to take even more control of their digital life by hosting their own social media instance.  Read about Togethr, what it is today, and a glimpse of our future plans.

JMP now supports SMS-only ports.  Landlines and most numbers with VoIP providers (but not numbers with most mobile carriers) are eligible to have JMP provide SMS/MMS messaging services for the number, while voice and other services would remain with the current carrier.  This feature is in Alpha, contact support if you are interested.

This month also saw the release of Cheogram Android 2.10.6-1.  This version merges in the latest upstream release of Conversations, as well as fixes for the contacts integration and playback for some media types (most notably 3GPP videos).

Finally, our integration with Snikket hosting is coming along.  For all of this year JMP customers have been able to get into the Snikket hosting beta with the promise of never having to pay for the JMP-using Jabber IDs they host there.  Now, JMP customers no longer need to contact Snikket staff to be put into the regular beta queue.  Contact JMP support and we can set you up with a Snikket instance directly.  We will continue to work on this integration until someday it becomes a fully self-serve part of signup.

To learn what’s happening with JMP between newsletters, here are some ways you can find out:

Thanks for reading and have a wonderful rest of your week!

by Stephen Paul Weber at May 31, 2022 13:30

May 28, 2022

The XMPP Standards Foundation

XMPP & Google Summer of Code 2022: Welcome new contributors!

XSF and GSoC 2022 Logo

The Google Summer of Code 2022 is about to lift off and coding starts soon! Not only has the XSF been accepted (again!) as a hosting organization for XMPP projects, we can also welcome two new contributors who will work on open-source software projects in the XMPP environment! We have updated our dedicated web page for the Google Summer of Code 2022 accordingly.

The XMPP projects at Google Summer of Code 2022

So, please welcome Patiga and PawBud as new contributors! It is really great that you chose XMPP for your coding adventure!

  • Patiga will work on more flexible file transfers in Dino. Mentors will be fiaxh and Marvin W. - many thanks to both of you!
    • Resource-wise, messenger applications tend to be on the lightweight side of the spectrum. This drastically changes when file transfers are added to the equation. File transfers can arbitrarily increase resource-usage, both on network and data storage aspects. To alleviate this issue, stateless file sharing empowers the user to make informed decisions on which files to download. Deliverables:
      • Unified handling of HTTP and Jingle (peer-to-peer) file transfers
      • Enable sending metadata alongside files
      • Thumbnail previews for images
  • PawBud will work towards adding support for A/V communication via Jingle in ConverseJS. Mentors will be JC Brand and vanitasvitae - many thanks to both of you, too!
    • The idea is to add support for Audio & Video communication through the Jingle protocol. The goal is to create a Converse plugin that adds the ability to make one-on-one audio/video calls from Converse. The audio/video calls will be compatible with other XMPP clients.

Feel free to spread the word via Mastodon or Twitter.


Check out our media channels, too!

Looking forward!

–The XSF Organisation Admin

May 28, 2022 00:00

May 26, 2022

Erlang Solutions

WombatOAM & the machine learning library

WombatOAM, the powerful operations, monitoring, and maintenance platform has a new machine learning library to assist with metric prediction. Learn about the main features, the algorithms used and how you can benefit from it together with WombatOAM’s new UI from Mohamed Ali Khechine, Tamás Lengyel and Janos Sandor.


Mohamed Ali Khechine: The theme of today’s webinar is how we created a machine learning library for metric prediction for WombatOAM, the monitoring and maintenance tool we develop at Erlang Solutions. We built this library using Elixir. It will be presented by the next host; for now, I just want to say a few words about the Wombat tool itself. WombatOAM is software used to monitor BEAM-based systems. What I want to show today are its main capabilities. I will go through them and explain, bit by bit, why they are part of WombatOAM and how we came up with these tools.

Wombat is software that you self-host, and you can use it to monitor systems ranging from RabbitMQ to Elixir, Phoenix and, of course, Erlang itself. There are many ways to install Wombat in your system: AWS, Kubernetes, or simply running it locally. I will share the documentation after the webinar has finished, but I wanted to tell you that however your system and environment are set up, Wombat will be able to adapt and use all of its capabilities there.

When you install Wombat and start it, you just have to set up a new password and that’s it. Then you can access the interface. 

You should be able to access the node that you wish to monitor through the internet or through a network that you have set up locally, or anything which allows distributed communication through TCP. So you just need to specify the node name and the cookie, and then Wombat is able to add it. For example, for this webinar, I have set up a RabbitMQ node and I have set up four Erlang nodes. 

When you add a node, Wombat automatically starts GenServer agents on the specified node, and these GenServers are responsible for delivering the metrics and capabilities to the Wombat node.

So for example, we have here a group of agents or GenServers that are mainly focused on bringing the BEAM specific metrics that we can, for example, see here.

By default, out of the box, Wombat offers all of the specific metrics that are important for monitoring the health of your node, plus a few that relate more to the health of the machine the node is running on. There are also a few extra metrics that I will explain later. We additionally have a more in-depth view of the running processes: for example, we can get information about the memory usage of all processes running in a specific application, and similarly the reduction count of all processes running in a specific application.

So all of this information is fine-grained to allow the user to get more information out of the node and the same thing is happening for nodes that, for example, have the RabbitMQ application.

The next part of the system is logging: by default, Wombat also collects all of the logs generated on the nodes you monitor. All of these logs are stored on your machine; you can filter them, explore them, and, if you wish, push them to a log-aggregating tool like Graylog in the same way. I also wanted to show, for example, how alarms are presented.

We can monitor specific processes. For example, let’s say you have a very important process that you use for routing, and you want to monitor that specific process for its memory usage and its message queue length. You should create a monitor for it, and let’s see how we can trigger it by pushing some metrics.

Wombat will detect that a process has an issue with its mailbox, and by default this will create an alarm with all of the information necessary to debug the issue as soon as it happens. Wombat first checks the number of messages that are stuck. If it is, for example, 100 and then drops back down, no alarm is triggered, because the messages were quickly processed. But if the queue stays stuck, Wombat will automatically create an alarm, shown within a minute of the peak of the message queue.

These are the alarms that are generated by default by Wombat. What you can see above is that you can get information about the Pid of the process where the issue occurred and you can get a sample of the messages that are stuck in the message queue so that you can at least know what type of message is making the process get stuck in that phase.

The processes are all listed and sorted by message queue or reductions. For example, you can get information about the process that has the highest messages in their mailbox.
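Such a listing can be sketched with plain BEAM introspection; the snippet below (not Wombat’s actual code) ranks live processes by message-queue length using `Process.list/0` and `Process.info/2`:

```elixir
defmodule MailboxTop do
  # Return the n processes with the longest message queues,
  # as {pid, queue_length} tuples sorted in descending order.
  def top(n \\ 5) do
    Process.list()
    |> Enum.map(fn pid ->
      case Process.info(pid, :message_queue_len) do
        {:message_queue_len, len} -> {pid, len}
        # Process may have died between list/0 and info/2
        nil -> {pid, 0}
      end
    end)
    |> Enum.sort_by(fn {_pid, len} -> len end, :desc)
    |> Enum.take(n)
  end
end
```

A process whose mailbox keeps growing will float to the top of this list, which is exactly the signal the alarm described above keys on.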

Wombat allows you also to have a remote shell towards your monitored node. You can also explore the Mnesia and ETS tables. 

In case you want to send the information you have just seen to other tools, you can choose from the systems you already use. For example, I have already set up a Grafana dashboard with the metrics coming in from Wombat: I simply configured Wombat to report the metrics in the Prometheus format, which is shown here.

All of this is configurable, of course. I didn’t speak about it now because this presentation is mainly going to be about the machine learning aspect of metric predictions. But I wanted to show you that from the documentation, you can explore, for example, all of the metric capabilities.

Please find the WombatOAM documentation here.

We also monitor and have integration with Phoenix, Elixir, RabbitMQ, Riak, MongooseIM, and so on. The full list of available metrics will be in the documentation. 

The next presentation is going to be about the machine learning tool. 

Arnold- the machine learning tool

Tamás Lengyel: We started developing Arnold about a year ago. First, we started building it for Wombat only, but later it became a separate component from Wombat.

We are using Arnold for time-series forecasting, anomaly detection, and analysing the incoming metric values using the Axon library.

First, I want to mention the Nx library, which is the native tensor implementation for Elixir. You can think of it as the NumPy of Elixir. Nx has GPU acceleration built on Google’s XLA and is implemented natively in Elixir. Axon is built on top of the Nx library; it provides an Nx-powered neural network interface, and it is the tool we used to create Arnold. You can think of it as the TensorFlow of Elixir. Currently, both libraries are heavily in development, and while Nx has a stable release (0.1.0), Axon does not yet have an official release.

Main features

What are the main features of Arnold? As mentioned, it is a separate component. It has a REST API for communicating with the outside world, so not only Elixir or Erlang nodes can talk to it, but also a Python application or any application that can make REST API calls. We implemented multiple prediction methods, which we’ll discuss later; we call them simple and complex methods. We have dynamic alarms and load balancing, and inside Wombat we also implemented metric filtering, so as not to overload the tool.

The structure

This is a simplified view of the structure. Arnold has three main components.

The first is the sensors, where the metrics are stored and all incoming metrics are gathered and preprocessed before being sent to the neural network and the training algorithm.

We store the incoming metrics in Mnesia with the help of a wrapper library called Memento. We have three tags for each metric: hourly, daily, and weekly. Each tag has a constant value, which is the number of values we should gather before starting the training algorithm. Wombat sends the hourly metrics every minute; we then average the last 15 minutes of metrics and call the result one daily metric. When we reach the threshold defined by the tag, we send the data to the neural network for training.
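The aggregation and threshold check can be sketched in a few lines (illustrative only, not Arnold’s code; of the threshold values, only the hourly 180 comes from the text below, the daily and weekly numbers are placeholders):

```elixir
defmodule SensorSketch do
  # Samples to gather per tag before training starts.
  # 180 (hourly) is from the description; daily/weekly are assumed placeholders.
  @thresholds %{hourly: 180, daily: 96, weekly: 168}

  # Collapse per-minute samples into 15-minute averages ("daily" points).
  def daily_points(minute_samples) do
    minute_samples
    |> Enum.chunk_every(15, 15, :discard)
    |> Enum.map(fn window -> Enum.sum(window) / length(window) end)
  end

  # Training is triggered once a tag has gathered enough samples.
  def ready_for_training?(tag, samples), do: length(samples) >= @thresholds[tag]
end
```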

The training algorithm uses Axon models. The prediction method is chosen based on whether we have a trained neural network or not; that is how we determine whether to use the Simple or the Complex method.

The complex ones are the neural networks and the simple ones are statistical tools and models. Mainly, we use them for instant predictions and analysis tools for the alarms.

What algorithms are used?

For forecasting, we use exponential smoothing: single, double, and triple. We use single exponential smoothing when we cannot detect any trend or seasonality, double when we detect only a trend component, and triple when we detect seasonality as well. For trend detection, we use the Mann-Kendall test. For seasonality detection, we use pattern matching: we try to find a correlation between a predefined small seasonal pattern and our current values, and if there is a correlation, we say that the metric has seasonality.
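As an illustration, single exponential smoothing is only a few lines of Elixir. This is the textbook formula (level = alpha * x + (1 - alpha) * previous level), not Arnold’s internal implementation:

```elixir
defmodule Smoothing do
  # Smoothed levels for a series; alpha in (0, 1] weights recent observations.
  def single([first | rest], alpha) do
    Enum.scan(rest, first, fn x, prev -> alpha * x + (1 - alpha) * prev end)
  end

  # One-step-ahead forecast is simply the last smoothed level.
  def forecast(series, alpha), do: series |> single(alpha) |> List.last()
end
```

Double and triple smoothing extend this with a trend term and a seasonal term respectively, which is why Arnold only reaches for them once trend or seasonality has been detected.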

When we have enough data, we send it to the training algorithm and then we can switch to the complex neural network-based predictions and forecasting. For alarms, we use linear correlation to see when a metric has any kind of relationship with other metrics, so that it could be easier to find the root cause of a possible problem.

Feeding data into Arnold

If you have a node, in our case Wombat, that uses these API calls, you have to use the following query parameters: the node ID and the metric name. In a JSON body, you specify the type, the value, and a unique timestamp. The type can be any type that is present in Wombat: a single integer or float, a histogram, a duration, a meter, or a spiral, and you can send it in raw format; Arnold will handle the processing. In the same way as we input data into Arnold, we can fetch data with the prediction route.
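Based on that description, a metric submission body might look roughly like the JSON below. The exact routes and field spellings are not documented here, so treat this purely as an illustrative sketch:

```json
{
  "type": "integer",
  "value": 42,
  "timestamp": 1653994800
}
```

The node ID and metric name travel as query parameters rather than in the body, and the prediction route returns the forecast data described next.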

We specify the node ID, the metric name, and a tag, which can be hourly, daily, or weekly. We also specify a forecast horizon, which will be covered later. Arnold sends back a JSON document with the prediction values as a four-element list of lists, that is, one list containing four separate lists. This makes it easy to display via a graph library like Highcharts in JavaScript. We also get an analysis of whether an alarm should be raised or not, a message stating whether the value crossed the expected range and by how much, and, of course, the correlation data.

The training process

The first step is to gather data. Arnold’s manager sends the data to the correct sensor as a two-element {timestamp, value} tuple. The manager is responsible for getting the correct sensor from the load balancer. One sensor is responsible for one metric, so if we send 200 metrics, we are going to have 200 sensors. A sensor stores the data for a given tag until that tag’s threshold is reached.

For example, we start training for the hourly metrics, which are sent from Wombat every minute, once we have gathered 180 samples. That's three hours of data in total, but it can be increased to five or six hours; it depends on the user. These sensors are saved every five seconds by the manager. The manager also does the training checks: when a sensor reaches the tag threshold, the manager marks it for training. Then the training process starts, and its first step is data preprocessing.
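As a rough sketch of the sensor buffering described above (class and method names are hypothetical; the real implementation is an Elixir process per metric):

```python
class Sensor:
    """One sensor buffers (timestamp, value) samples for a single
    metric until its tag threshold is reached, e.g. 180 one-minute
    samples = 3 hours of data for the hourly tag."""
    def __init__(self, threshold=180):
        self.threshold = threshold
        self.samples = []

    def add(self, timestamp, value):
        self.samples.append((timestamp, value))

    def ready_for_training(self):
        return len(self.samples) >= self.threshold

s = Sensor()
for minute in range(180):
    s.add(minute * 60, 42.0)
print(s.ready_for_training())  # -> True
```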

First, we have to extend the features with time tags. The slide shows the raw timestamp value on a logarithmic scale; as you can see, it just keeps increasing.

That is not usable or understandable for the neural network, so we have to transform it with the sine and cosine functions. As you can see here, the blue line represents one hour, so two peaks are considered as one hour.

Let's imagine a value that is sent at 11:50. After transforming the data, that value is going to be -0.84, and if we go forward in time to 12:50 or 13:50, the transformed value will always be the same: the transformation always returns -0.84 for that minute of the hour. This way the neural network can tell whether the incoming values follow an increasing or decreasing trend, or show seasonality within a period of an hour, and so on. Of course, we did that for the hourly metrics, for the daily metrics (the red line), and for the weekly metrics as well (the yellow one).
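A minimal sketch of this periodic encoding (names are mine; the daily and weekly features use the same idea with a 24-hour and a 7-day period):

```python
import math

HOUR = 3600  # seconds

def hour_features(ts):
    """Map a timestamp to sine/cosine of its position within the
    hour, so 11:50, 12:50, 13:50, ... all encode identically."""
    angle = 2 * math.pi * (ts % HOUR) / HOUR
    return math.sin(angle), math.cos(angle)

t_1150 = 11 * HOUR + 50 * 60
print(hour_features(t_1150) == hour_features(t_1150 + HOUR))  # -> True
```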

The next step is splitting the data. We use 80% for the training and as an option, we can use 20% for testing purposes to see how our training model is performing. After splitting the data, we have to normalise them. We use the mean and the standard deviation for normalisation. 

The formula we use: from each value, we subtract the mean and then divide by the standard deviation. We use the Nx built-in functions for that: the Nx mean, and the standard deviation, which was not present in version 0.1.0 but will be in the next release. Until then, we are using our own implementation of standard deviation.
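The same z-score normalisation, sketched in plain Python (Arnold does this with Nx tensors):

```python
import math

def normalise(values):
    """z-score normalisation: subtract the mean from each value,
    then divide by the (population) standard deviation."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return [(v - mean) / std for v in values]

# Normalised data is centred on zero:
print(normalise([2.0, 4.0, 6.0]))  # middle value maps to 0.0
```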

The next step is creating the dataset: we zip the features with the target values. We have optional parameters here, like the batch size, which defaults to 32, and we can shuffle the data as well for better training. The last step is to send it to the training algorithm. But before sending, we need a model. We have SingleStep and MultiStep models, and they are easily extendable. Currently, we are using a single-step dense model, but there is a linear model as well.
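A sketch of this dataset step with the talk's default batch size of 32 (the function name is hypothetical; Arnold does the zipping, shuffling, and batching on Nx tensors):

```python
import random

def make_dataset(features, targets, batch_size=32, shuffle=True, seed=0):
    """Zip features with targets, optionally shuffle, then split
    into batches of batch_size (the last batch may be smaller)."""
    pairs = list(zip(features, targets))
    if shuffle:
        random.Random(seed).shuffle(pairs)
    return [pairs[i:i + batch_size] for i in range(0, len(pairs), batch_size)]

batches = make_dataset(range(100), range(100))
print([len(b) for b in batches])  # -> [32, 32, 32, 4]
```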

After we send our dataset to the training algorithm, we use the model to start the training on the client's machine. This function returns the model state, and after that we can run the test function as well. The finish time depends on the number of values we have and on the performance of the machine it is running on: it can take from 10 minutes to an hour to train the neural network.

Here you can see what it looks like inside Wombat. 

So as you can see, here we have the actual values for different metrics, then we have the expected ranges for the predictions. 

You can see here what happens if one or two metrics cross their expected range. Here we are using exponential smoothing, which is calculated over all of the values before the current timestamp, including the current one, so it adapts as it goes: as the actual values go down, the predictions follow that trend. Two alarms were raised because two metrics crossed their expected ranges.

From Wombat, we can simply configure it. We have a port, a forecast horizon, and, as I said, we can filter the metrics that we would like to send for training. The port is necessary for the REST API communication. The forecast horizon defines how many data points we want to forecast after the current timestamp. In the case of Wombat's hourly metrics, if we set the horizon to five, we will have five minutes of metrics ahead of the current timestamp. For the daily metrics, which are calculated every 15 minutes, a forecast horizon of 5 results in 5 times 15 minutes, i.e. 1 hour and 15 minutes of metrics ahead of the current timestamp. For the weekly metrics, which are calculated every hour, it means 5 hours ahead of the current timestamp. So we will have a lot of data for those types as well.
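The horizon arithmetic from the paragraph above, as a tiny sketch (names are mine):

```python
# Sampling interval per tag, in minutes: hourly metrics arrive every
# minute, daily every 15 minutes, weekly every hour.
INTERVAL_MIN = {"hourly": 1, "daily": 15, "weekly": 60}

def horizon_minutes(tag, horizon):
    """How far ahead of the current timestamp a forecast horizon
    reaches, in minutes."""
    return INTERVAL_MIN[tag] * horizon

print(horizon_minutes("hourly", 5))  # -> 5
print(horizon_minutes("daily", 5))   # -> 75  (1 h 15 min)
print(horizon_minutes("weekly", 5))  # -> 300 (5 hours)
```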

The resources we used are the TensorFlow documentation and tutorials, because it's really easy to integrate those models and concepts into Axon. Of course, I also used the Axon and Nx documentation, and the book "Forecasting: Principles and Practice" (I used the second edition, but the third edition is available as well). It's really interesting and tells you a lot about how time series forecasting was done before neural networks were available, how to use statistical methods like ARIMA, seasonal ARIMA, and exponential smoothing, and how to decompose the data into separate components.

It was very interesting to read, and I learned a lot that was very helpful in the creation of Arnold. I also used the Erlang Ecosystem Foundation Slack and its machine learning channel, where I could contact the creators of both libraries, and the time series forecasting methods from InfluxData, because that's where the idea to combine exponential smoothing with neural networks came from.

What are the plans for Arnold?

We are cleaning it up and implementing unit tests so that we can automate the testing and release processes. We would also like to have Prometheus integration as an optional training method: instead of dynamic learning, this integration lets us send a lot of data to Arnold at once, so we don't have to fall back on the simple predictions and can immediately start training and using the neural networks.

We are open sourcing the project, which is available now on GitHub together with the documentation and a wiki guide on how you can contribute.

We have a long README where I try to explain how Arnold works, how you can use it, and how to compile it from source code; alternatively, you can just download an already prepackaged compressed file. We also have the wiki on how you can contribute, whose structure is still under development. And, of course, we have the full documentation for Arnold.

We can see Arnold running in the background. As you can see, we have the expected ranges, the actual values, and the predictions as well. And I’m just gonna do a quick garbage collection so that it will trigger alarms and I can show you one.

If you want to use Arnold, in Wombat, you just have to manually configure it at the data export page and you don’t have to download Arnold separately or start it separately. It’s all handled by the integration. So basically, no additional manual work is needed for that. 

Two metrics crossed their expected ranges, and we can see that two alarms were raised. We can see that a metric is out of range; the detailed information in the additional info says that the process memory is currently lower than the expected range by that amount of bytes. We can see the total memory as well, and the correlation data: the process memory has a positive correlation with the total memory, and vice versa.

Wombat’s new UI

Janos Sandor: There will be a new UI for Wombat. We've been working on this since the old UI became overloaded, too complex, and hard to maintain: it contained a lot of deprecated JS libraries, and it was hard to write proper E2E test cases for it. The new UI uses the Angular framework and official Angular-based extensions. It is built almost from the ground up, and we wanted to keep only the necessary parts.

The base design is provided by Angular Material, the official add-on library. It has lots of built-in, animated, nice-looking web elements with icons, and almost everything we used in the previous UI could be imported from its modules. It will also be easier to build custom themes later on.

Instead of JavaScript, all the source files are written in TypeScript. Its compiler is a huge help in finding bugs and avoiding race conditions.

We kept the “front-page” dashboard view. We can combine metrics, graphs, alarms, and statistics here.

We can move these panels, we can resize them, and can add more panels, or we can switch to different front pages.

The side menu provides the primary way of navigating between the main views. (Before we had the top toolbar, but now we have a side menu.)

On the Topology page we can see information about the nodes.

Logs can be listed on another view, called ‘Logs’. We can set up auto-refresh, filter the logs, and check them. If a log message is too long, it is split into several parts, which the user can load all at once or one by one.

We have a different page to check the raised alarms.

The new dashboard supports a right-click (context) menu, which is different on each page. There are some persistent menu items, like adding nodes or getting to the configuration.

The metrics page looks almost the same as before. We can visualise metric graphs just like on the old UI, and there is a live mode to update the graph continuously.

We have the Tools page, where we can check the process manager: we can see process information and also monitor processes. Of course, we have a table visualiser, so we can read the contents of ETS tables.

In the configuration menu, we have the ‘Data explore’ menu where we can configure the integrations.

The new UI has another new feature: a dark mode we can switch to.

 We can create new families, manage the nodes, and remove nodes or families.


Mohamed Ali Khechine: We rely on telemetry to expose certain metrics. For example, by default Wombat gets its Phoenix metrics via the telemetry ones, and similarly for Ecto. We also have a telemetry plugin that creates metrics from the telemetry events you customised. So basically, if you have telemetry metrics, which are events that expose values and have specific event names, Wombat will create metrics based on them, and they will, of course, be shown here. In the same way, when you create an exometer metric, Wombat will also pick it up and expose it as a metric that you can subsequently expose in the Prometheus format and, of course, show in Grafana or anywhere else. I hope I answered the question.

The post WombatOAM & the machine learning library appeared first on Erlang Solutions.

by Erlang Admin at May 26, 2022 10:00

May 25, 2022


Gajim 1.4.2

As promised earlier, releasing new Gajim versions is now much easier! 🎉 Gajim 1.4.2 comes with better performance and an important bugfix. But there is more! After popular demand, we brought back the calendar for browsing history.

What’s New

This release mainly fixes one bug: we improved how Gajim manages the chat messages it displays to you. Before this improvement, messages would sometimes only show up after changing focus or resizing the window. This issue has been fixed.

After popular demand, we brought back the calendar for browsing history. The new search view now offers a calendar button which lets you choose which day Gajim should jump to.

Windows users please note: Windows builds are now based on Python 3.9, which does not run on Windows 7 or older.


Several issues have been fixed in this release.

  • Fix for marking messages as read by the recipient when sending messages from another device
  • Fix for offline contacts not filtered out in the contact list
  • Fix for sorting contacts by status in the contact list
  • Status messages are now correctly kept through restarting Gajim, if enabled
  • Fix canceling Jingle file transfers
  • Improve behavior for Send Button in connection with pressing Enter

Have a look at the changelog for the complete list.


As always, don’t hesitate to contact us or open an issue on our GitLab.

May 25, 2022 00:00