Planet Jabber

January 16, 2025

Erlang Solutions

MongooseIM Round-Up

This is your one-stop shop for discovering everything Erlang Solutions offers in MongooseIM. From insightful blog posts to services and compelling case studies, this round-up shows how MongooseIM can evolve your messaging infrastructure.

So, if you’re a business leader looking to understand scalable messaging solutions or a decision-maker keen to learn from real-world examples, read on to discover how MongooseIM can support your strategic goals.

MongooseIM Blogs

Our experts have crafted a series of blog posts to help you explore the full potential of MongooseIM. 

Whether you’re curious about its newest features, looking for strategies to optimise your messaging systems, or seeking ways to strike a balance between innovation and sustainability, there’s something for everyone in these insightful reads.

MongooseIM 6.3: Prometheus, CockroachDB, and More by Pawel Chrzaszcz

Discover the latest updates in MongooseIM 6.3, including integration with Prometheus for advanced monitoring and the robust scalability of CockroachDB. This blog takes a deep dive into the enhancements that make MongooseIM a leader in real-time messaging.

Learn more about these features and how they can support your business’s growth.

5 Ways MongooseIM Provides Scalable and Future-Proof Messaging

Why should your organisation choose MongooseIM? This blog outlines five compelling reasons. Explore how MongooseIM’s architecture is designed to meet the demands of growing user bases and evolving requirements, making it the ideal choice for forward-thinking businesses.

Discover more insights in this post.

Balancing Technical Debt and Innovation by Nelson Vides

Managing technical debt is a balancing act that every tech leader faces. This article offers actionable insights into how MongooseIM helps teams strike the right balance between maintaining existing systems and driving innovation.

Check out the full article to learn from our experience working with leading organisations.

MongooseIM Services For Your Messaging Infrastructure

At the heart of MongooseIM lies a host of services designed to optimise, scale, and simplify your messaging infrastructure. Here are some of our services that are tailored to meet the needs of modern businesses and enhance system performance.

MongooseIM Healthcheck

Ensure your MongooseIM deployment is optimised for performance, scalability, and resilience. Our MIM Healthcheck service provides a thorough assessment, offering recommendations to improve system stability and enhance user experience. 

Explore MongooseIM Healthcheck in detail.

MongooseIM Autoscaler

MongooseIM Autoscaler helps your system adapt to traffic spikes with ease. Whether your user base grows gradually or you face sudden surges, our Autoscaler ensures seamless performance, reducing downtime and costs. 

Learn more about how Autoscaler keeps your infrastructure reliable and cost-efficient.

tryMongooseIM

Curious about MongooseIM but not sure where to start? tryMongooseIM offers a hands-on introduction to the platform, allowing you to test its features and evaluate its potential for your business before committing. 

Find out how to get started with tryMongooseIM today.

MongooseIM Case Studies: Success in Action

Real-world success stories speak volumes about MongooseIM’s impact. Discover how leading organisations have leveraged MongooseIM to transform their messaging systems, achieving scalability, reliability, and innovation in competitive industries.

Pando

Pando, a leading healthcare supply chain solutions provider, turned to MongooseIM to build a robust messaging platform that could handle real-time communication at scale. 

Read the full case study to learn how our collaboration transformed their operations with a secure and scalable solution.

Beekeeper

Beekeeper’s workforce communication platform needed a messaging backbone supporting diverse global industries. With MongooseIM, they achieved the scalability, security, and reliability their customers demanded. 

Discover their success story and see how MongooseIM helped Beekeeper thrive.

To Conclude

Thank you for exploring our MongooseIM round-up! We hope this guide provides valuable insights into the power of MongooseIM. If you’d like to discuss how MongooseIM can elevate your messaging infrastructure, don’t hesitate to get in touch.

Contact us today and start your journey toward scalable, future-proof messaging solutions.

The post MongooseIM Round-Up appeared first on Erlang Solutions.

by Erlang Solutions Team at January 16, 2025 16:17

ProcessOne

How Big Tech Pulled Off the Billion-User Heist

For many years, I have heard countless justifications for keeping messaging systems closed. Many of us have tried to rationalize walled gardens for various reasons:

  • Closed messaging systems supposedly enable faster progress, as there’s no need to collaborate on shared specifications or APIs. You can change course more easily.
  • Closed messaging systems are better for security, spam, or whatever other risks we imagine, because owners feel they have better control of what goes in and out.
  • Closed messaging systems are said to foster innovation by protecting the network owner’s investments.

But is any of this really true? Let’s take a step back and examine these claims.

A Brief History of Messaging Tools

Until the 1990s, messaging systems were primarily focused on building communities. The dominant protocol of the time was IRC (Internet Relay Chat). While IRC allowed private messaging, its main purpose was to facilitate large chatrooms where people with shared interests could hang out and interact.

In the 1990s, messaging evolved into a true communication tool, offering an alternative to phone calls. It enabled users to stay in touch with friends and family while forging new connections online. With the limitations of the dial-up era, where users weren’t always connected, asynchronous communication became the norm. Features like offline messages and presence indicators emerged, allowing users to see at a glance who was online, available, or busy.

The revolution began with ICQ, quickly followed by competitors like Yahoo! Messenger and MSN Messenger. However, this proliferation of platforms created a frustrating experience: your contacts were spread across different networks, requiring multiple accounts and clients. Multiprotocol clients like Meebo and Pidgin emerged, offering a unified interface for these networks. Still, they often relied on unofficial protocol implementations, which were unreliable and lacked key features compared to native clients.

To address these issues, a group of innovators in 1999 set out to design a better solution—an open instant messaging protocol that revolved around two fundamental principles:

  1. Federation: A federated protocol would allow users on any server to communicate seamlessly with users on other servers. This design was essential for scalability, as supporting billions of users on a single platform was unimaginable at the time.
  2. Gateway Support: The protocol would include gateways to existing networks, enabling users to connect with contacts on other platforms transparently, without needing to juggle multiple applications. The gateways were implemented on the server-side, allowing fast iterations on gateway code.

This initiative, originally branded as Jabber, gave rise to XMPP (Extensible Messaging and Presence Protocol), a protocol standardized by the IETF. XMPP gained traction, with support from several open-source servers and clients. Major players adopted the protocol—Google for Google Talk and Facebook for Facebook Messenger, enabling third-party XMPP clients to connect to their services. The future of open messaging looked promising.

Fast Forward 20 Years

Today, that optimism has faded. Few people know about XMPP or its newer counterpart, Matrix. Google’s messaging services have abandoned XMPP, Facebook has closed its XMPP gateways, and the landscape has returned to the fragmentation of the past. 

Instead of Yahoo! Messenger and MSN, we now deal with WhatsApp, Facebook Messenger, Telegram, Google Chat, Signal, and even messaging features within social networks like Instagram and LinkedIn. Our contacts are scattered across these platforms, forcing us to switch between apps just as we did in the 1990s.

What Went Wrong?

Many of these platforms initially adopted XMPP, including Google, Facebook, and even WhatsApp. However, their focus on growth led them to abandon federation. Requiring users to create platform-specific accounts became a key strategy for locking in users and driving their friends to join the same network. Federation, while technically advantageous, was seen as a barrier to user acquisition and growth. 

The Big Heist

The smartphone era marked a turning point in messaging, fueled by always-on connectivity and the rise of app stores. Previously, deploying an app at scale required agreements with mobile carriers to preload the app on the phones they sold. Carriers acted as gatekeepers, tightly controlling app distribution. However, the introduction of app stores and data plans changed everything. These innovations empowered developers to bypass carriers and build their own networks on top of carrier infrastructure—a phenomenon known as over-the-top (OTT) applications.

Among these new apps was WhatsApp, which revolutionized messaging in several ways. Initially, WhatsApp relied on Apple’s Push Notification Service to deliver messages in real time, bypassing the need for a complex infrastructure at launch. Its true breakthrough, however, was the decision to use phone numbers as user identifiers—a bold move that set a significant precedent. At the time, most messaging platforms avoided this approach because phone numbers were closely tied to SMS, and validating them via SMS codes came with significant costs.

WhatsApp cleverly leveraged this existing, international system of telecommunication identifiers to bootstrap its proprietary network. By using phone numbers, it eliminated the need for users to create, manage and share separate accounts, simplifying onboarding. WhatsApp also capitalized on the high cost of SMS at the time. Since short messages were often not unlimited, and international SMS was especially expensive, many users found it cheaper to rely on data plans or Wi-Fi to message friends and family—particularly across borders.

When we launched our own messaging app, TextOne (now discontinued), we considered using phone numbers as identifiers but ultimately decided against it. Forcing users to disclose such personal information felt intrusive and misaligned with privacy principles. By then, the phone had shifted from being a shared household device to a deeply personal one, making phone numbers uniquely tied to individual identities. 

WhatsApp later moved to its own infrastructure, based on ejabberd, but kept its service closed.

Unfortunately, most major players seeking to scale their messaging platforms adopted the phone number as a universal identifier. WhatsApp’s early adoption of this strategy helped it rapidly amass a billion users, giving it a decisive first-mover advantage. However, it wasn’t the only player to recognize and exploit the power of phone numbers in building massive-scale networks. Today, the phone number is arguably the most accurate global identifier for individuals, serving as a cornerstone of the flourishing data economy.

What’s Wrong With Using Phone Numbers as IDs?

Phone numbers are a common good—a foundation of global communication. They rely on the principle of universal accessibility: you can reach anyone, anywhere in the world, regardless of their phone provider or location. This system was built on international cooperation, with a branch of the United Nations playing a key role in maintaining a provider-agnostic, interoperable platform. At its core is a globally unique phone numbering system, created through collaborative standards and protocols. 

However, over-the-top (OTT) companies have exploited this infrastructure to build private networks on top of the public system. They’ve leveraged the universal identification scheme of phone numbers—and, by extension, the global interoperable network—to construct proprietary, closed ecosystems. 

To me, this feels like a misuse of a common good. Phone numbers, produced through international cooperation, should not be appropriated freely by private corporations without accountability. While it may be too late to reverse this trend, we should consider a contribution system for companies that store and use phone numbers as identifiers. 

For example, companies that maintain databases with millions of unique phone numbers could be required to pay an annual fee for each phone number they store. This fee could be distributed to the countries associated with those numbers. Such a system would achieve two things: 

  1. Encourage Accountability: Companies would need to evaluate whether collecting and storing phone numbers is truly essential for their business. If the data isn’t valuable enough to justify the cost, they might choose not to collect it.
  2. Promote Fairness: For companies that rely heavily on phone numbers to track, match, and build private, non-interoperable services, this fee would act as a fair contribution, akin to taxes paid for using public road infrastructure. 

Beyond Taxes: The Push for Interoperability

Of course, a contribution system alone won’t solve the larger issue. We also need a significant push toward interoperable and federated messaging. While the European Digital Markets Act (DMA) includes an interoperability requirement, it doesn’t go far enough. Interoperability alone cannot address the challenges of closed ecosystems.

I’ll delve deeper into why interoperability must be paired with federation in a future article, as this is a critical piece of the puzzle.

Interoperability vs. Velocity

To conclude, I’d like to reference the introduction of the IETF SPIN draft, which perfectly encapsulates the trade-offs between interoperability and innovation:

Voice, video and messaging today is commonplace on the Internet, enabled by two distinct classes of software. The first are those provided by telecommunications carriers that make heavy use of standards, such as the Session Initiation Protocol (SIP) [RFC3261]. In this approach - which we call the telco model - there is interoperability between different telcos, but the set of features and functionality is limited by the rate of definition and adoption of standards, often measured in years or decades. The second model - the app model - allows a single entity to offer an application, delivering both the server side software and its corresponding client-side software. The client-side software is delivered either as a web application, or as a mobile application through a mobile operating system app store. The app model has proven incredibly successful by any measure. It trades off interoperability for innovation and velocity.

The downside of the loss of interoperability is that entry into the market place by new providers is difficult. Applications like WhatsApp, Facebook Messenger, and Facetime, have user bases numbering in the hundreds of millions to billions of users. Any new application cannot connect with these user bases, requiring the vendor of the new app to bootstrap its own network effects.

This summary aligns closely with the ideas I’ve explored in this article. 

I believe we’ve reached a point where we need interoperability far more than continued innovation in voice, video, and messaging. While innovation in these areas has been remarkable, we have perhaps been too eager—or too blind—to sacrifice interoperability in the name of progress. 

Now, the pendulum is poised to swing back. Centralization must give way to federation if we are to maintain the universality that once defined global communication. Without federation, there can be no true global and universal service, and without universality, we risk regressing, fragmenting all our communication systems into isolated and proprietary silos. 

It’s time to prioritize interoperability, to reclaim the vision of a truly connected world where communication is open, accessible, and universal.

by Mickaël Rémond at January 16, 2025 16:14

January 15, 2025

ProcessOne

Fluux multiple Subscriptions/Services

Fluux is our ejabberd Business Edition cloud service. With a subscription, we deploy, manage, update and scale an instance of our most scalable messaging server. Up to now, if you wanted to deploy several services, you had to create another account with a different email. Starting today, you can manage and pay for different servers from a single Fluux account.

Here is how to use that feature. On the Fluux dashboard main page, after the list of your services/platforms, you may have noticed a "New" button.

You will then be redirected to a page where you choose your plan.

Once the terms and conditions are approved, you will be able to fill in your card information on a page hosted by our payment provider.

When the payment succeeds, you will be redirected to the Fluux console, with a link to create your service:

On this last page you will be able to provide a technical name that will be used to provision your Fluux service.

After about 10 minutes, you can enjoy your new service at techname.m.in-app.io (such as test1.m.in-app.io in the screenshot above).

by Sébastien Luquet at January 15, 2025 16:27

January 14, 2025

Ignite Realtime Blog

XMPP Summit #27 and FOSDEM 2025

The XMPP Standards Foundation’s yearly Summit will be held on January 30 and 31st, in Brussels. The Summit is an annual two-day gathering where we discuss XMPP protocol development topics. It is a place for XMPP developers to meet each other, and make progress on current issues within the protocol and ecosystem.

Immediately following the Summit is FOSDEM. FOSDEM is a free event for software developers to meet, share ideas and collaborate. Every year, thousands of developers of free and open source software from all over the world gather at the event in Brussels.

I will be present at the Summit, and a small army of Ignite community members (including myself) will be present at FOSDEM. We hope to see you at either event! If you’re around, come say hi!

For other release announcements and news, follow us on Mastodon or X.

1 post - 1 participant

Read full topic

by guus at January 14, 2025 10:17

January 13, 2025

Erlang Solutions

The BEAM-Erlang’s virtual machine

Welcome to the first chapter of the “Elixir, 7 Steps to Start Your Journey” series. In my previous post, I discussed my journey with the programming language.

In this chapter, we will discuss the Erlang Virtual Machine, the BEAM.

To understand why the Elixir programming language is so powerful and reliable, we must understand its foundations, which means talking about Erlang. 

Elixir runs on the Erlang Virtual Machine and inherits many of its virtues. In this post, you will learn a little about the history of Erlang, the objective with which it was initially created, and why it is fundamental for Elixir.

What is Erlang?

Erlang as a programming language

Erlang is a programming language created in the mid-1980s by Joe Armstrong, Robert Virding, and Mike Williams at the Ericsson Computer Science Laboratory. Initially designed for telecommunications, it is now a general-purpose language. It was influenced by other programming languages, such as ML and Prolog, and was released as open-source in 1998.

Erlang was designed with distributed, fault-tolerant, massively concurrent, and soft real-time systems in mind, making it an excellent choice for today’s systems. Most teams are looking for exactly these features, in addition to the confidence that comes from Erlang’s track record in production systems.

Some of the characteristics of this programming language are:

  • It is a declarative language, which means it is based on the principle of describing what should be calculated instead of how.
  • Pattern matching is possible at a high level and also on bit sequences.
  • Functions in Erlang are first-class data.

Erlang as the development ecosystem 

Up to this point, we have referred to Erlang as the programming language; however, it should be noted that Erlang can also refer to an entire development ecosystem that is made up of:

  • The Erlang programming language
  • The framework OTP
  • A series of tools and
  • The virtual machine, BEAM

Erlang, as an ecosystem, was explicitly created to support highly available systems, which provide service even when errors or unexpected circumstances occur, and this is due to many of the characteristics of its virtual machine (VM).

So, although Erlang as a programming language is pretty cool on its own, the real magic happens when all the ecosystem elements are combined: the programming language, libraries, OTP, and the virtual machine.

Erlang's virtual machine, the BEAM, and OTP

If you want to know more about the history of Erlang, the list of resources below will be very helpful.

Resources

Erlang Virtual Machine, BEAM

The Erlang Virtual Machine, known as the BEAM, runs as an operating system process and is responsible for executing the Erlang code. It is also responsible for creating, scheduling, and managing Erlang processes, which are the fundamental basis of concurrency. 

Thanks to the BEAM schedulers, these processes can be executed in the most efficient way possible, allowing the system to be highly scalable. The processes do not share memory; they communicate through asynchronous message passing. This mechanism is the foundation for a system’s fault tolerance. As they are entirely isolated, the other system processes will not be affected if an error occurs in one of them.
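The isolation and message-passing model described above can be sketched in a few lines of Elixir (which runs on the same VM). This is a minimal example using the standard primitives `spawn/1`, `send/2`, and `receive`:

```elixir
# The parent spawns an isolated child process; the two share no memory
# and communicate only by asynchronous message passing.
parent = self()

child =
  spawn(fn ->
    receive do
      {:ping, from} -> send(from, :pong)
    end
  end)

send(child, {:ping, parent})

receive do
  :pong -> IO.puts("child replied")
after
  1_000 -> IO.puts("no reply")
end
```

If the child crashed instead of replying, the parent would simply hit the `after` timeout: the failure stays contained in the child process.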

The BEAM is also responsible for parallelizing your concurrent Erlang programs, making the most of a machine’s resources. Initially, the virtual machine model was a single-run queue. However, it evolved into a run queue for each available processor, ensuring no bottlenecks and that Erlang programs work correctly on any system, regardless of the number of machine cores.
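You can check how many schedulers (and thus run queues) the BEAM started on your machine from an Elixir shell; by default it is one per logical core:

```elixir
# Number of online schedulers — by default, one per logical CPU core.
IO.inspect(System.schedulers_online(), label: "schedulers online")
```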

The Erlang Virtual Machine on multicore systems

Another characteristic is that storage management is automated. Garbage collection is implemented per process, which allows a system’s response time to always remain in the order of milliseconds without performance degradation.

And lastly, one of my favourite features is error detection. The virtual machine provides all the elements necessary for efficient error detection and handling, thus promoting an always-available system regardless of failures.
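A minimal sketch of this error-detection mechanism in Elixir: a process that traps exits receives a linked process's crash as an ordinary message instead of crashing itself, which is the building block that OTP supervisors are based on.

```elixir
# Trap exits: a crashing linked process becomes an {:EXIT, pid, reason}
# message in our mailbox rather than taking this process down too.
Process.flag(:trap_exit, true)

pid = spawn_link(fn -> exit(:boom) end)

receive do
  {:EXIT, ^pid, reason} -> IO.puts("child exited: #{inspect(reason)}")
end
```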

In summary, the BEAM is responsible for the scalability, distribution, and responsiveness of a system:

  • It manages the system’s concurrency.
  • It provides a mechanism for error detection and handling.
  • It makes the most of the computer’s resources.

 If you’d like to learn more about the duo that is Erlang and Elixir, check out the “What is Elixir” post.

Elixir in the BEAM

Like Erlang, Elixir was also influenced by other programming languages, including Erlang itself. Its code runs on the Erlang Virtual Machine, which means it takes advantage of all its features and can use all the Erlang libraries and the OTP framework.

Several programming languages besides Elixir and Erlang run on the BEAM, but Elixir has ensured that the interaction between the BEAM and programmers is fluid and quickly understandable.

Elixir code is compiled into bytecode that runs in the BEAM and is more compact than Erlang code. Its syntax is similar to how we communicate daily, allowing for early familiarization with the language, even if it is the first time you program with it. It also reduces the boilerplate and has amazing documentation.

So, when writing code with Elixir, we have the best of both: a solid and battle-tested foundation that allows us to create fail-safe systems and, on the other hand, nice syntax, well-defined patterns, and code simplification, among other things. Thanks to this, Elixir has been so well accepted and has rapidly gained popularity.

Elixir is a cool programming language that allows you to write code that is easy to understand and maintain and takes advantage of the Erlang concurrency model, which we will discuss in the next chapter.

> iex

iex(1)> list = [4, 5, 21, 1, 38]
[4, 5, 21, 1, 38]

iex(2)> erlang_example = :lists.sort(list)
[1, 4, 5, 21, 38]

iex(3)> elixir_example = Enum.sort(list)
[1, 4, 5, 21, 38]

Example of how you can run Erlang and Elixir code in an interactive Elixir shell

Next chapter

In the next post, “Understanding Processes and Concurrency,” we will discuss how Erlang processes work and their importance in developing robust and scalable systems. We will also see how concurrency works in Erlang and how this relates to Elixir. Do not miss it! You can drop the team a message if you’d like to discuss Elixir in more detail.

The post The BEAM-Erlang’s virtual machine appeared first on Erlang Solutions.

by Lorena Mireles at January 13, 2025 18:34

Erlang’s virtual machine, the BEAM

Welcome to the first chapter of the “Elixir, 7 Steps to Start Your Journey” series. In my previous post, I discussed my journey with the programming language.

In this chapter, we will discuss the Erlang Virtual Machine, the BEAM.

To understand why the Elixir programming language is so powerful and reliable, we must understand its foundations, which means talking about Erlang. 

Elixir runs on the Erlang Virtual Machine and inherits many of its virtues. In this post, you will learn a little about the history of Erlang, the objective with which it was initially created, and why it is fundamental for Elixir.

What is Erlang?

Erlang as a programming language

Erlang is a programming language created in the mid-1980s by Joe Armstrong, Robert Virding, and Mike Williams at the Ericsson Computer Science Laboratory. Initially designed for telecommunications, it is now a general-purpose language. It was influenced by other programming languages, such as ML and Prolog, and was released as open-source in 1998.

Erlang was designed with distributed, fault-tolerant, massively concurrent, and soft real-time systems in mind, making it an excellent choice for today’s systems. Most are looking for these features, in addition to having confidence in Erlang’s history in productive systems.

Some of the characteristics of this programming language are:

  • It is a declarative language, which means it is based on the principle of describing what should be calculated instead of how
  • Pattern matching is possible at a high level and also on bit sequences.
  • Functions in Erlang are first-class data.

Erlang as the development ecosystem 

Up to this point, we have referred to Erlang as the programming language; however, it should be noted that Erlang can also refer to an entire development ecosystem that is made up of:

  • The Erlang programming language
  • The framework OTP
  • A series of tools and
  • The virtual machine, BEAM

Erlang, as an ecosystem, was explicitly created to support highly available systems, which provide service even when errors or unexpected circumstances occur, and this is due to many of the characteristics of its virtual machine (VM).

So, although Erlang as a programming language is pretty cool on its own, the real magic happens when all the ecosystem elements are combined: the programming language, libraries, OTP, and the virtual machine.

Erlang's virtual machine, the BEAM OTP

If you want to know more about the history of Erlang, the list of resources below will be very helpful.

Resources

Erlang Virtual Machine, BEAM

The Erlang Virtual Machine, known as the BEAM, runs as an operating system process and is responsible for executing the Erlang code. It is also responsible for creating, scheduling, and managing Erlang processes, which are the fundamental basis of concurrency. 

Thanks to the BEAM schedulers, these processes can be executed in the most efficient way possible, allowing the system to be highly scalable. The processes do not share memory; they communicate through asynchronous message passing. This mechanism is the foundation for a system’s fault tolerance. As they are entirely isolated, the other system processes will not be affected if an error occurs in one of them.

The BEAM is also responsible for parallelizing your concurrent Erlang programs, making the most of a machine’s resources. Initially, the virtual machine model was a single-run queue. However, it evolved into a run queue for each available processor, ensuring no bottlenecks and that Erlang programs work correctly on any system, regardless of the number of machine cores.

Erlang Virtual Machine multicore

Another characteristic is that storage management is automated. Garbage collection is implemented per process, which allows a system’s response time to always remain in the order of milliseconds without performance degradation.

And lastly, one of my favourite features is error detection. The virtual machine provides all the elements necessary for efficient error detection and handling, thus promoting an always-available system regardless of failures.

In summary, the BEAM is responsible for the scalability, distribution, and responsiveness of a system:

  • Manages the concurrency of it.
  • It has a mechanism for error detection and handling.
  • Make the most of the computer’s resources.

 If you’d like to learn more about the duo that is Erlang and Elixir, check out the “What is Elixir” post.

Elixir in the BEAM

Like Erlang, Elixir was also influenced by other programming languages, including Erlang itself. Its code runs on the Erlang Virtual Machine, which means it takes advantage of all its features and can use all the Erlang libraries and the OTP framework.

Different programming languages ​​besides Elixir and Erlang run in the BEAM, but Elixir has ensured that the approach between BEAM and programmers is fluid and quickly understandable.

Elixir code is compiled into bytecode that runs in the BEAM and is more compact than Erlang code. Its syntax is similar to how we communicate daily, allowing for early familiarization with the language, even if it is the first time you program with it. It also reduces the boilerplate and has amazing documentation.

So, when writing code with Elixir, we get the best of both worlds: on one hand, a solid, battle-tested foundation for building fault-tolerant systems; on the other, pleasant syntax, well-defined patterns, and simpler code. Thanks to this, Elixir has been warmly received and has rapidly gained popularity.

Elixir is a cool programming language that lets you write code that is easy to understand and maintain, while taking advantage of the Erlang concurrency model, which we will discuss in the next chapter.

$ iex

iex(1)> list = [4, 5, 21, 1, 38]
[4, 5, 21, 1, 38]
iex(2)> erlang_example = :lists.sort(list)
[1, 4, 5, 21, 38]
iex(3)> elixir_example = Enum.sort(list)
[1, 4, 5, 21, 38]

Example of how you can run Erlang and Elixir code in an interactive Elixir shell

Next chapter

In the next post, “Understanding Processes and Concurrency,” we will discuss how Erlang processes work and their importance in developing robust and scalable systems. We will also see how concurrency works in Erlang and how this relates to Elixir. Do not miss it! You can drop the team a message if you’d like to discuss Elixir in more detail.

The post Erlang’s virtual machine, the BEAM appeared first on Erlang Solutions.

by Lorena Mireles at January 13, 2025 12:31

January 11, 2025

Mathieu Pasquet

slixmpp v1.8.6 - Codename ICE 2926

A bit less than a year after the previous version, there are around 45 commits to include in this new release.

New things

Added initial support for:

Improved

  • Better XEP-0199 (XMPP Ping) support for components
  • Better support for XEP-0313 (Message Archive Management): flipped pages, better date parsing
  • XEP-0424 (Message Retraction): Update plugin to the latest XEP version
  • XEP-0425 (Message Moderation): Update plugin to the latest XEP version
  • XEP-0461 (Message Replies): add fallback support and fix off-by-one error
  • Connectivity: add a "stanza_not_sent" event when a stanza is added to the queue instead of being sent
  • Improved type hints
  • Documentation improvements
  • Better debugging message when crafting stanzas manually

Fixed

  • Fix a bug in XEP-0231 (Bits of Binary) with non-existing BoBs
  • XEP-0045 (Multi User Chat): set a default timeout for join_muc_wait
  • Stability: prevent stanza formatting from bringing down the stream
  • aiodns is no longer installed on Windows
  • Fixed issues when executing tests
  • Vendored the imghdr module for the examples (removed in Python 3.13)

Closing note

Thanks to all contributors for this release, notably nicoco for tirelessly adding and updating XEP features for slidge (and waiting a long time for their merge), jinyu for taking up the task of making XEP-0045 actually usable, and sch for updating the documentation.

You can find the new release on codeberg.

P.S.: this release is named after the lovely train taking me to CCC, which is very late, and I don’t actually understand German.

by mathieui at January 11, 2025 18:00

January 10, 2025

The XMPP Standards Foundation

XMPP at FOSDEM 2025

We’re very excited to be back at FOSDEM in person in 2025. Once again, many members of the XSF and the XMPP community will be attending, and we hope to see you there!

Realtime Lounge

As usual, we will host the Realtime Lounge, where you can come and meet community members and project developers, see demos, and ask us questions. We’ll be in our traditional location: find us on the 2nd floor of the K building, beside the elevator (map below). Come and say hi! And yes, we’ve got stickers :-)

Map of the K building level 2

Talks

There will be a presentation in the Real Time Communications (RTC) track:

  • Jérôme Poisson (Goffi):
    • A Universal and Stable API to Everything: XMPP: “Nowadays, most services provide APIs with their own formats, and sometimes multiple versions, which may change over time. But there is a universal API with an excellent track record of stability and backward compatibility: XMPP! In this talk, I’ll show how XMPP can be more than just an Instant Messaging protocol, and how it can be an extremely powerful tool to access almost anything, from third-party networks (IM, microblogging, etc.) to file sharing, automation (IoT), and more.” The presentation will take place on Saturday, February 1st 2025, in the Real Time Communications (RTC) track, room K.3.601, from 18:25 to 18:40.

XMPP Summit 27

Prior to FOSDEM, the XSF will also hold its 27th XMPP summit. This is where community members gather to discuss protocol changes and exchange within the developer community. We’ll be reporting live from the event and also from FOSDEM.

Spread the word

Please share the news on other networks:

January 10, 2025 00:00

January 09, 2025

Erlang Solutions

Erlang’s virtual machine, the BEAM

Welcome to the first chapter of the “Elixir, 7 Steps to Start Your Journey” series. In my previous post, I discussed my personal journey with the programming language.

In this chapter, we will discuss the Erlang Virtual Machine, the BEAM.

To understand why the Elixir programming language is so powerful and reliable, we must understand its foundations, which means talking about Erlang. 

Elixir runs on the Erlang Virtual Machine and inherits many of its virtues. In this post, you will learn a little about the history of Erlang, the objective with which it was initially created, and why it is fundamental for Elixir.

What is Erlang?

Erlang as a programming language

Erlang is a programming language created in the mid-1980s by Joe Armstrong, Robert Virding, and Mike Williams at the Ericsson Computer Science Laboratory. Initially designed for telecommunications, it is now a general-purpose language. It was influenced by other programming languages, such as ML and Prolog, and was released as open-source in 1998.

Erlang was designed with distributed, fault-tolerant, massively concurrent, and soft real-time systems in mind, making it an excellent choice for today’s systems: most of them need exactly these features, along with the confidence that comes from Erlang’s long history in production systems.

Some of the characteristics of this programming language are:

  • It is a declarative language, based on the principle of describing what should be computed rather than how.
  • Pattern matching is possible at a high level and also on bit sequences.
  • Functions in Erlang are first-class data.

Erlang as the development ecosystem 

Up to this point, we have referred to Erlang as the programming language; however, it should be noted that Erlang can also refer to an entire development ecosystem that is made up of:

  • The Erlang programming language
  • The OTP framework
  • A set of tools
  • The virtual machine, the BEAM

Erlang, as an ecosystem, was explicitly created to support highly available systems, ones that keep providing service even when errors or unexpected circumstances occur. This is largely due to the characteristics of its virtual machine (VM).

So, although Erlang as a programming language is pretty cool on its own, the real magic happens when all the ecosystem elements are combined: the programming language, libraries, OTP, and the virtual machine.

Erlang's virtual machine, the BEAM OTP

If you want to know more about the history of Erlang, the list of resources below will be very helpful.

Resources

Erlang Virtual Machine, BEAM

The Erlang Virtual Machine, known as the BEAM, runs as an operating system process and is responsible for executing the Erlang code. It is also responsible for creating, scheduling, and managing Erlang processes, which are the fundamental basis of concurrency. 

Thanks to the BEAM schedulers, these processes are executed as efficiently as possible, allowing the system to be highly scalable. Processes do not share memory; they communicate through asynchronous message passing. This mechanism is the foundation of a system’s fault tolerance: because processes are entirely isolated, an error in one of them does not affect the rest of the system.
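As a minimal sketch of this model (the Counter module and its messages are our own illustration, not from the post), two isolated processes communicating only by message passing might look like this in Elixir:

```elixir
# Each BEAM process keeps its own state (here, `count`) and shares nothing;
# the only way to interact with it is to send it a message.
defmodule Counter do
  def loop(count) do
    receive do
      {:increment, caller} ->
        send(caller, {:ok, count + 1})  # reply asynchronously to the caller
        loop(count + 1)                 # recurse with the new private state
      :stop ->
        :ok
    end
  end
end

# Spawn an isolated process and talk to it via messages only.
pid = spawn(Counter, :loop, [0])
send(pid, {:increment, self()})

receive do
  {:ok, new_count} -> IO.puts("count is now #{new_count}")  # prints "count is now 1"
end
```

If the Counter process crashed, only that process would die; the sender would simply stop receiving replies (or be notified, if linked or monitored), which is the isolation the BEAM’s fault tolerance builds on.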

The BEAM is also responsible for parallelizing your concurrent Erlang programs, making the most of a machine’s resources. Initially, the virtual machine used a single run queue; it later evolved to one run queue per available processor, which avoids bottlenecks and lets Erlang programs run correctly on any system, regardless of the number of cores.

Erlang Virtual Machine multicore

Another characteristic is that storage management is automated. Garbage collection is implemented per process, which allows a system’s response time to always remain in the order of milliseconds without performance degradation.

And lastly, one of my favourite features is error detection. The virtual machine provides all the elements necessary for efficient error detection and handling, thus promoting an always-available system regardless of failures.

In summary, the BEAM is responsible for a system’s scalability, distribution, and responsiveness:

  • It manages the system’s concurrency.
  • It provides a mechanism for error detection and handling.
  • It makes the most of the computer’s resources.

If you’d like to learn more about the duo that is Erlang and Elixir, check out the “What is Elixir” post.

Elixir in the BEAM

Like Erlang, Elixir was also influenced by other programming languages, including Erlang itself. Its code runs on the Erlang Virtual Machine, which means it takes advantage of all its features and can use all the Erlang libraries and the OTP framework.

Other programming languages besides Elixir and Erlang run on the BEAM, but Elixir has made the interaction between the BEAM and programmers fluid and quick to understand.

Elixir code is compiled into bytecode that runs on the BEAM and is more compact than Erlang code. Its syntax is close to the way we communicate every day, which makes the language feel familiar quickly, even if it is your first time programming with it. It also reduces boilerplate and has excellent documentation.

So, when writing code with Elixir, we get the best of both worlds: on one hand, a solid, battle-tested foundation for building fault-tolerant systems; on the other, pleasant syntax, well-defined patterns, and simpler code. Thanks to this, Elixir has been warmly received and has rapidly gained popularity.

Elixir is a cool programming language that lets you write code that is easy to understand and maintain, while taking advantage of the Erlang concurrency model, which we will discuss in the next chapter.

$ iex

iex(1)> list = [4, 5, 21, 1, 38]
[4, 5, 21, 1, 38]
iex(2)> erlang_example = :lists.sort(list)
[1, 4, 5, 21, 38]
iex(3)> elixir_example = Enum.sort(list)
[1, 4, 5, 21, 38]

Example of how you can run Erlang and Elixir code in an interactive Elixir shell

Next chapter

In the next post, “Understanding Processes and Concurrency,” we will discuss how Erlang processes work and their importance in developing robust and scalable systems. We will also see how concurrency works in Erlang and how this relates to Elixir. Do not miss it! You can drop the team a message if you’d like to discuss Elixir in more detail.

The post Erlang’s virtual machine, the BEAM appeared first on Erlang Solutions.

by Lorena Mireles at January 09, 2025 11:57


January 05, 2025

The XMPP Standards Foundation

The XMPP Newsletter December 2024

XMPP Newsletter Banner

Welcome to the XMPP Newsletter, great to have you here again! This issue covers the month of December 2024.

Like this newsletter, many projects and their efforts in the XMPP community are a result of people’s voluntary work. If you are happy with the services and software you may be using, please consider saying thanks or help these projects! Interested in supporting the Newsletter team? Read more at the bottom.

XSF Announcements

XSF Membership

If you are interested in joining the XMPP Standards Foundation as a member, please apply by February 16th, 2025, 00:00 UTC!

XMPP Summit 27 & FOSDEM 2025

The XSF is planning XMPP Summit 27, which is to take place on January 30th & 31st 2025 in Brussels (Belgium, Europe). Following the Summit, the XSF is also planning to be present at FOSDEM 2025, which takes place on February 1st & 2nd 2025. Find all the details in our Wiki. Please sign up now if you are planning to attend, since this helps with organising. The event is of course open for everyone interested in participating. Spread the word within your circles!

XMPP at FOSDEM 2025

  • Jérôme Poisson (Goffi) presentation at FOSDEM 2025:
    • A Universal and Stable API to Everything: XMPP: “Nowadays, most services provide APIs with their own formats, and sometimes multiple versions, which may change over time. But there is a universal API with an excellent track record of stability and backward compatibility: XMPP! In this talk, I’ll show how XMPP can be more than just an Instant Messaging protocol, and how it can be an extremely powerful tool to access almost anything, from third-party networks (IM, microblogging, etc.) to file sharing, automation (IoT), and more.” The presentation will take place on Saturday, February 1st 2025, in the Real Time Communications (RTC) track, room K.3.601, from 18:25 to 18:40.

XSF Fiscal Hosting Projects

The XSF offers fiscal hosting for XMPP projects. Please apply via Open Collective. For more information, see the announcement blog post. Current projects you can support:

XMPP Events

Talks

XMPP Articles

XMPP Software News

XMPP Clients and Applications

Kaidan IM 0.10 horizontal view screenshot

XMPP Servers

  • ProcessOne announces ejabberd 24.12: The “evacuate_kindly” release: including a few improvements and bug fixes, this release comes a month and a half after version 24.10, with around 60 commits to the core repository alongside a few updates in dependencies.
  • Prosody IM is pleased to announce the release of version 0.12.5, a new minor release of the 0.12 stable branch. As usual, you can consult the changelog for this release, and the download instructions for many platforms on their download page.

XMPP Libraries & Tools

Extensions and specifications

The XMPP Standards Foundation develops extensions to XMPP in its XEP series in addition to XMPP RFCs.

Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).

Proposed

The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

  • No XEPs proposed this month.

New

  • Version 0.1.0 of XEP-0501 (Pubsub Stories).
    • Promoted to Experimental (XEP Editor: dg)
  • Version 0.1.0 of XEP-0502 (MUC Activity Indicator).
    • Promoted to Experimental (XEP Editor: dg)

Deferred

If an Experimental XEP is not updated for more than twelve months, it will be moved from Experimental to Deferred. If there is another update, the XEP will be moved back to Experimental.

  • No XEPs deferred this month.

Updated

  • Version 0.2.0 of XEP-0480 (SASL Upgrade Tasks).
    • Fix SCRAM upgrade description and XML schema. (tm)
  • Version 0.1.1 of XEP-0500 (MUC Slow Mode).
    • Include first feedbacks (jl)
  • Version 0.2.0 of XEP-0501 (Pubsub Stories).
    • Add pubsub#item_expire in the node configuration (tj)

Last Call

Last calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call can help improve the XEP before returning it to the Council for advancement to Stable.

  • Last Call for comments on XEP-0421 (Anonymous unique occupant identifiers for MUCs).
    • This Last Call shall end at the close of business on 2025-01-06
  • Last Call for comments on XEP-0424 (Message Retraction).
    • This Last Call shall end at the close of business on 2025-01-06

Stable

  • No XEPs moved to Stable this month.

Deprecated

  • No XEPs deprecated this month.

Rejected

  • No XEPs rejected this month.

Spread the news

Please share the news on other networks:

Subscribe to the monthly XMPP newsletter
Subscribe

Also check out our RSS Feed!

Looking for job offers or want to hire a professional consultant for your XMPP project? Visit our XMPP job board.

Newsletter Contributors & Translations

This is a community effort, and we would like to thank translators for their contributions. Volunteers and more languages are welcome! Translations of the XMPP Newsletter will be released here (with some delay):

  • English (original): xmpp.org
    • General contributors: Adrien Bourmault (neox), Alexander “PapaTutuWawa”, Arne, cal0pteryx, emus, Federico, Gonzalo Raúl Nemmi, Jonas Stein, Kris “poVoq”, Licaon_Kter, Ludovic Bocquet, Mario Sabatino, melvo, MSavoritias (fae,ve), nicola, Schimon Zachary, Simone Canaletti, singpolyma, XSF iTeam
  • French: jabberfr.org and linuxfr.org
    • Translators: Adrien Bourmault (neox), alkino, anubis, Arkem, Benoît Sibaud, mathieui, nyco, Pierre Jarillon, Ppjet6, Ysabeau
  • Italian: notes.nicfab.eu
    • Translators: nicola
  • Spanish: xmpp.org
    • Translators: Gonzalo Raúl Nemmi
  • German: xmpp.org
    • Translators: Millesimus

Help us to build the newsletter

This XMPP Newsletter is produced collaboratively by the XMPP community. Each month’s newsletter issue is drafted in this simple pad. At the end of each month, the pad’s content is merged into the XSF GitHub repository. We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. You have a project and want to spread the news? Please consider sharing your news or events here, and promote it to a large audience.

Tasks we do on a regular basis:

  • gathering news in the XMPP universe
  • short summaries of news and events
  • summary of the monthly communication on extensions (XEPs)
  • review of the newsletter draft
  • preparation of media images
  • translations
  • communication via media accounts

Unsubscribe from the XMPP Newsletter

To unsubscribe from this list, please log in first. If you have not previously logged in, you may need to set up an account with the appropriate email address.

License

This newsletter is published under CC BY-SA license.

January 05, 2025 00:00

December 31, 2024

Prosodical Thoughts

Prosody 0.12.5 released

We are pleased to announce a new minor release from our stable branch.

Hope everyone has had a good 2024, and you’re looking forward to a better 2025!

We’re ending this year with a bugfix release for our stable 0.12 branch. This brings some general polish and a collection of fixes for various small issues people have reported in the past months.

A notable behaviour change in this release is that Prosody will no longer send delivery errors to people you have blocked. Instead it will now just silently discard messages from the blocked JID, to avoid informing them that they have been blocked - which tends to be the preference of people we have spoken with, as well as the behaviour of many other online platforms. Obviously there are trade-offs here, so the behaviour is now configurable (see the mod_blocklist documentation).

This will be among the last releases from the 0.12 branch, as we are preparing a new major release with lots of new features. Stay tuned, and happy new year!

A summary of changes in this release:

Fixes and improvements

  • mod_blocklist: Drop blocked messages without error, option to restore compliant behavior

Minor changes

  • core.certmanager: Validate that ‘tls_profile’ is one of the valid values
  • net.http: Throw error if missing TLS context for HTTPS request
  • net.http.parser: Reject overlarge header section earlier
  • net.http.files: Validate argument to setup function
  • MUC: optimizations for broadcast of visitor presence (thanks Jitsi team)
  • net.server_event: Add ‘wrapserver’ API
  • scansion: Enable blocklist compat during tests to fix CI
  • prosodyctl check: Warn about invalid domain names in the config file
  • util.prosodyctl.check: Correct modern replacement for ‘disallow_s2s’
  • util.prosodyctl.cert: Ensure old cert is moved out of the way
  • util.prosodyctl.check: Improve error handling of UDP socket setup (for #1803)
  • mod_smacks: Destroy timed out session in async context (fixes #1884: ASYNC-01 in mod_smacks hibernation timeout)
  • mod_invites: Fix traceback when token_info isn’t set
  • mod_admin_shell: Allow matching on host or bare JID in c2s:show
  • mod_admin_adhoc: Fix log messages for reloading modules.
  • core.moduleapi: Default labels to empty list to fix error if omitted
  • mod_muc_mam: Improve wording of enable setting
  • mod_bookmarks: Suppress error publishing empty legacy bookmarks w/ no PEP node
  • mod_bookmarks: Clarify log messages on failure to sync to modern PEP bookmarks
  • mod_invites_adhoc: Fix result form type (thanks betarays)
  • mod_disco: Advertise disco#info and #items on bare JIDs to fix #1664: mod_disco on account doesn’t return disco#info feature
  • util.xtemplate: Fix error on applying each() to zero stanzas

Download

As usual, download instructions for many platforms can be found on our download page

If you have any questions, comments or other issues with this release, let us know!

by The Prosody Team at December 31, 2024 16:54

December 19, 2024

ProcessOne

ejabberd 24.12

ejabberd 24.12

Here comes ejabberd 24.12, including a few improvements and bug fixes. This release comes a month and half after 24.10, with around 60 commits to the core repository alongside a few updates in dependencies.

Release Highlights:

Among them, the evacuate_kindly command is a new tool which gave this release its funny codename. It lets you stop and restart ejabberd while preventing users from reconnecting, so you can perform your maintenance tasks peacefully. So, this is not an emergency exit from ejabberd, but rather testimony that this release is paving the way for a lot of new cool stuff in 2025.

In the meantime, we wish you a Merry Christmas and a Happy New Year!

Other contents:

If you are upgrading from a previous version, there are no required changes in the SQL schemas, configuration or hooks. There are, however, some changes in the Commands API, which is now at version 3 (see below).

Below is a detailed breakdown of the improvements and enhancements:

XEP-0484: Fast Authentication Streamlining Tokens

We added support for XEP-0484: Fast Authentication Streamlining Tokens. This allows clients to request time-limited tokens from servers, which can later be used for faster authentication requiring fewer round trips. To enable this feature, add the mod_auth_fast module to the modules section.
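For example, enabling the module with default options in ejabberd.yml would look like this (a minimal sketch; merge it into your existing modules section):

```yaml
modules:
  mod_auth_fast: {}
```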

Deprecation schedule for Erlang/OTP older than 25.0

It is expected that around April 2025, GitHub Actions will remove Ubuntu 20 and it will not be possible to run automatically dynamic tests for ejabberd using Erlang/OTP older than 25.0.

For that reason, the planned schedule is:

  • ejabberd 24.12

    • Usage of Erlang/OTP older than 25.0 is still supported, but discouraged
    • Anybody still using Erlang 24.3 down to 20.0 is encouraged to upgrade to a newer version. Erlang/OTP 25.0 and higher are supported. For instance, Erlang/OTP 26.3 is used for the binary installers and container images.
  • ejabberd 25.01 (or later)

    • Support for Erlang/OTP older than 25.0 is deprecated
    • Erlang requirement softly increased in configure.ac
    • Announce that there is no guarantee ejabberd can compile, start or pass the Common Tests suite using Erlang/OTP older than 25.0
    • Provide instructions for anybody to manually re-enable it and run the tests
  • ejabberd 25.01+1 (or later)

    • Support for Erlang/OTP older than 25.0 is removed completely in the source code

Commands API v3

This ejabberd 24.12 release introduces ejabberd Commands API v3 because some commands have changed arguments and result formatting. You can continue using API v2; or you can update your API client to use API v3. Check the API Versions History.

Some commands that accepted accounts or rooms as arguments, or returned JIDs, have changed their arguments and results names and format to be consistent with the other commands:

  • Arguments that refer to a user account are now named user and host
  • Arguments that refer to a MUC room are now named room and service
  • In other words, each argument is now only the local or server part, not the full JID
  • Conversely, results that refer to a user account or MUC room are now the full JID

In practice, the commands that change in API v3 are:

If you want to update ejabberd to 24.12, but prefer to continue using an old API version with mod_http_api, you can set this new option:

modules:
  mod_http_api:
    default_version: 2

Improvements in commands

There are a few improvements in some commands:

  • create_rooms_file: Improved, it now supports vhosts with different configurations
  • evacuate_kindly: New command to kick users and prevent login (#4309)
  • join_cluster: Improved explanation: this returns immediately (since 5a34020, 24.06)
  • mod_muc_admin: Renamed the argument name to room for consistency, with backwards compatibility (no need to update API clients)

Use non-standard STUN port

STUN via UDP can easily be abused for reflection/amplification DDoS attacks, so ejabberd.yml.example now suggests a non-standard port to make it harder for attackers to discover the service.

Modern XMPP clients discover the port via XEP-0215, so there's no advantage in sticking to the standard port.
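For illustration, a minimal STUN listener entry in ejabberd.yml using a non-standard port might look like the sketch below; the port number 7445 is an arbitrary example pick, not a recommendation.

```yaml
listen:
  -
    # Arbitrary non-standard port (example value); the standard STUN
    # port would be 3478. Clients find this port via XEP-0215.
    port: 7445
    transport: udp
    module: ejabberd_stun
```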

Disable the systemd watchdog by default

Some users reported ejabberd being restarted by systemd due to missing watchdog pings despite the actual service operating just fine. So far, we weren't able to track down the issue, so we'll no longer enable the watchdog in our example service unit.

Define macro as environment variable

ejabberd has allowed you to define macros in the configuration file since version 13.10. This lets you define a value once at the beginning of the configuration file and use that macro to set option values several times throughout the file.

Now it is also possible to define the macro value as an environment variable. The environment variable name is EJABBERD_MACRO_ followed by the macro name.

For example, if you configured in ejabberd.yml:

define_macro:
  LOGLEVEL: 4

loglevel: LOGLEVEL

Now you can define (and overwrite) that macro definition when starting ejabberd. For example, if starting ejabberd in interactive mode:

EJABBERD_MACRO_LOGLEVEL=5 make relive

This is especially useful when using containers with slightly different values (a different host, different port numbers...): instead of maintaining a different configuration file for each container, you can use a macro in your custom configuration file and define a different macro value as an environment variable when starting each container. See some example usages in CONTAINER's composer examples.

Elixir modules for authentication

ejabberd modules can be written in the Elixir programming language since ejabberd 15.02. And now, ejabberd authentication methods can also be written in Elixir!

This means you can write a custom authentication method in Erlang or in Elixir, or write an external authentication script in any language you want.
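As a sketch of the external-script route, the helpers below implement the framing an extauth script speaks on stdin/stdout, following ejabberd's documented external authentication protocol: each request is a 2-byte big-endian length followed by a colon-separated payload such as auth:user:server:password, and each reply is a 2-byte length (always 2) followed by a 16-bit 0/1 result. The helper functions are illustrative and not shipped with ejabberd.

```python
# Sketch of the framing used by ejabberd's external auth protocol.
# These helpers are illustrative; a real script would loop forever,
# reading requests from stdin and writing results to stdout.
import struct
from typing import BinaryIO, List

def read_request(stream: BinaryIO) -> List[str]:
    """Read one request: a 2-byte big-endian length, then a payload
    such as b'auth:user:server:password'. The split is limited so a
    password containing ':' stays intact."""
    (length,) = struct.unpack(">H", stream.read(2))
    return stream.read(length).decode().split(":", 3)

def write_result(stream: BinaryIO, ok: bool) -> None:
    """Write the reply: a 2-byte length (always 2) and a 16-bit 0/1 result."""
    stream.write(struct.pack(">HH", 2, 1 if ok else 0))
```

A real script would dispatch on the first field (auth, isuser, setpass, ...) and check the credentials against your own user database.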

There's an example authentication method in the lib/ directory. Place your custom authentication method in that directory, compile ejabberd, and configure it in ejabberd.yml:

auth_method: 'Ejabberd.Auth.Example'

For consistency with that file naming scheme, the old mod_presence_demo.ex has been renamed to mod_example.ex. Other minor changes were made to the Elixir example code.

Redis now supports Unix Domain Socket

Support for Unix Domain Sockets was added to the listener's port option in ejabberd 20.07. More recently, ejabberd 24.06 added support in sql_server when using MySQL or PostgreSQL.
This feature improves performance and security when those programs run on the same machine as ejabberd.

Now the redis_server option also supports Unix Domain Socket.

The syntax is similar to the other options: simply set unix: followed by the full path to the socket file. For example:

redis_server: "unix:/var/run/redis/redis.socket"

Additionally, we took the opportunity to switch from the wooga/eredis Erlang library, which hasn't been updated in the last six years, to the actively maintained Nordix/eredis fork.

New evacuate_kindly command

ejabberd nowadays has around 180 commands to perform administrative tasks. Let's review some of their use cases:

  • Did you modify the configuration file? Reload the configuration file and apply its changes

  • Did you apply some patch to ejabberd source code? Compile and install it, and then update the module binary in memory

  • Did you update ejabberd-contrib specs, or improve your custom module in .ejabberd-modules? Call module_upgrade to compile and upgrade it into memory

  • Did you upgrade ejabberd, and that includes many changes? Compile and install it, then restart ejabberd completely

  • Do you need to stop a production ejabberd which has users connected? stop_kindly the server, informing users and rooms

  • Do you want to stop ejabberd gracefully? Then simply stop it

  • Do you need to stop ejabberd immediately, without worrying about the users? You can halt ejabberd abruptly

Now there is a new command, evacuate_kindly, useful when you need ejabberd running to perform some administrative tasks, but you don't want users connected while you perform them.

It stops port listeners to prevent new client or server connections, informs users and rooms, waits the given number of seconds or minutes, and then restarts ejabberd. However, when ejabberd is started again, the port listeners remain stopped: this allows you to perform administrative tasks, for example in the database, without having to worry about users.

For example, assume ejabberd is running with users connected. First, let's evacuate all the users:

ejabberdctl evacuate_kindly 60 "The server will stop in one minute."

After one minute, ejabberd restarts with connections disabled.
Now you can perform any administrative tasks you need.
Once everything is ready to accept user connections again, simply restart ejabberd:

ejabberdctl restart

Acknowledgments

We would like to thank everyone who contributed source code, documentation, and translations for this release:

Thanks also to all the people contributing in the ejabberd chatroom, the issue tracker...

Improvements in ejabberd Business Edition

Customers of the ejabberd Business Edition, in addition to all those improvements and bugfixes, also get support for Prometheus.

Prometheus support

Prometheus can now be used as a backend for mod_mon in addition to statsd, influxdb, influxdb2, datadog and dogstatsd.

You can expose all mod_mon metrics to Prometheus by adding an HTTP listener pointing to mod_prometheus, for example:

  -
    port: 5280
    module: ejabberd_http
    request_handlers:
      "/metrics": mod_prometheus

You can then add a scrape config to Prometheus for ejabberd:

scrape_configs:
  - job_name: "ejabberd"
    static_configs:
      - targets:
          - "ejabberd.domain.com:5280"

You can also limit the metrics to a specific virtual host by adding its name to the path:

scrape_configs:
  - job_name: "ejabberd"
    static_configs:
      - targets:
          - "ejabberd.domain.com:5280"
    metrics_path: /metrics/myvhost.domain.com

Fix

  • PubSub: fix issue on get_item_name with p1db storage backend.

ChangeLog

This is a more detailed list of changes in this ejabberd release:

Miscellanea

  • Elixir: support loading Elixir modules for auth (#4315)
  • Environment variables EJABBERD_MACRO to define macros
  • Fix problem starting ejabberd when first host uses SQL, other one mnesia
  • HTTP Websocket: Enable allow_unencrypted_sasl2 on websockets (#4323)
  • Relax checks for channels bindings for connections using external encryption
  • Redis: Add support for unix domain socket (#4318)
  • Redis: Use eredis 1.7.1 from Nordix when using mix/rebar3 and Erlang 21+
  • mod_auth_fast: New module with support for XEP-0484: Fast Authentication Streamlining Tokens
  • mod_http_api: Fix crash when module not enabled (for example, in CT tests)
  • mod_http_api: New option default_version
  • mod_muc: Make RSM handling in disco items correctly count skipped rooms
  • mod_offline: Only delete offline msgs when user has MAM enabled (#4287)
  • mod_privilege: Properly handle roster IQs
  • mod_pubsub: Send notifications on PEP item retract
  • mod_s2s_bidi: Catch extra case in check for s2s bidi element
  • mod_scram_upgrade: Don't abort the upgrade
  • mod_shared_roster: The name of a new group is lowercased
  • mod_shared_roster: Get back support for groupid@vhost in displayed

Commands API

  • Change arguments and result to consistent names (API v3)
  • create_rooms_file: Improve to support vhosts with different config
  • evacuate_kindly: New command to kick users and prevent login (#4309)
  • join_cluster: Explain that this returns immediately (since 5a34020, 24.06)
  • mod_muc_admin: Rename argument name to room for consistency

Documentation

  • Fix some documentation syntax, add links to toplevel, modules and API
  • CONTAINER.md: Add kubernetes yaml examples to use with podman
  • SECURITY.md: Add security policy and reporting guidelines
  • ejabberd.service: Disable the systemd watchdog by default
  • ejabberd.yml.example: Use non-standard STUN port

WebAdmin

  • Shared group names are case sensitive, use original case instead of lowercase
  • Use lowercase username and server authentication credentials
  • Fix calculation of node's uptime days
  • Fix link to displayed group when it is from another vhost

Full Changelog

https://github.com/processone/ejabberd/compare/24.10...24.12

ejabberd 24.12 download & feedback

As usual, the release is tagged in the Git source code repository on GitHub.

The source package and installers are available on the ejabberd Downloads page. To check the *.asc signature files, see How to verify ProcessOne downloads integrity.

For convenience, there are alternative download locations like the ejabberd DEB/RPM Packages Repository and the GitHub Release / Tags.

The ecs container image is available in docker.io/ejabberd/ecs and ghcr.io/processone/ecs. The alternative ejabberd container image is available in ghcr.io/processone/ejabberd.

If you think you've found a bug, please search for it or file a bug report on GitHub Issues.

by Jérôme Sautret at December 19, 2024 16:27

December 18, 2024

JMP

Newsletter: JMP at SeaGL, Cheogram now on Amazon

Hi everyone!

Welcome to the latest edition of your pseudo-monthly JMP update!

In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client.  Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

JMP at SeaGL

The Seattle GNU/Linux Conference (SeaGL) is happening next week and JMP will be there!  We’re going to have a booth with some of our employees, and will have JMP eSIM Adapters and USB card readers for purchase (if you prefer to save on shipping, or like to pay cash or otherwise), along with stickers and good conversations. :)  The exhibition area is open all day on Friday and Saturday, November 8 and 9, so be sure to stop by and say hi if you happen to be in the area.  We look forward to seeing you!

Cheogram Android in Amazon Appstore

We have just added Cheogram Android to the Amazon Appstore!  And we also added Cheogram Android to Aptoide earlier this month.  While F-Droid remains our preferred official source, we understand many people prefer to use stores that they’re used to, or that come with their device.  We also realize that many people have been waiting for Cheogram Android to return to the Play Store, and we wanted to provide this other option to pay for Cheogram Android while Google works out the approval process issues on their end to get us back in there.  We know a lot of you use and recommend app store purchases to support us, so let your friends know about this new Amazon Appstore option for Cheogram Android if they’re interested!

New features in Cheogram Android

As usual, we’ve added a bunch of new features to Cheogram Android over the past month or so.  Be sure to update to the latest version (2.17.2-1) to check them out!  (Note that Amazon doesn’t have this version quite yet, but it should be there shortly.)  Here are the notable changes since our last newsletter: privacy-respecting link previews (generated by sender), more familiar reactions, filtering of conversation list by account, nicer autocomplete for mentions and emoji, and fixes for Android 15, among many others.

To learn what’s happening with JMP between newsletters, here are some ways you can find out:

by Denver Gingerich at December 18, 2024 15:37

Newsletter: Year in Review, Google Play Update

Hi everyone!

Welcome to the latest edition of your pseudo-monthly JMP update!

In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client.  Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

As we approach the close of 2024, we want to take a moment to reflect on a year full of growth, innovation, and connection. Thanks to your support and engagement, JMP has continued to thrive as a service that empowers you to stay connected with the world using open standards and flexible technology. Here’s a look back at some of the highlights that made this year so special:

Cheogram Android

Cheogram Android, which we sponsor, experienced significant developments this year. Besides the preferred distribution channel of F-Droid, the app is also available on other platforms like Aptoide and the Amazon Appstore. It was removed from the Google Play Store in September for unknown reasons, and after a long negotiation has been restored to Google Play without modification.

Cheogram Android saw several exciting feature updates this year, including:

  • Major visual refresh
  • Animated custom emoji
  • Better Reactions UI (including custom emoji reactions)
  • Widgets powered by WebXDC for interactive chats and app extensions
  • Initial support for link previews
  • The addition of a navigation drawer to show chats from only one account or tag
  • Allowing edits to any message you have sent

This month also saw the release of 2.17.2-3 including:

  • Fix direct shares on Android 12+
  • Option to hide media from gallery
  • Do not re-notify dismissed notifications
  • Experimental extensions support based on WebXDC
  • Experimental XEP-0227 export support

Of course nothing in Cheogram Android would be possible without the hard work of the upstream project, Conversations, so thanks go out to the devs there as well.

eSIM Adapter Launch

This year, we introduced the JMP eSIM Adapter—a device that bridges the gap for devices without native eSIM support, and adds flexibility for devices with eSIM support. Whether you’re travelling, upgrading your device, or simply exploring new options, the eSIM Adapter makes it seamless to transfer eSIMs across your devices.

Engaging with the Community

This year, we hosted booths at SeaGL, FOSSY, and HOPE, connecting with all of you in person. These booths provided opportunities to learn about our services, pay for subscriptions, or purchase eSIM Adapters face-to-face.

Addressing Challenges

In 2024, we also tackled some pressing industry issues, such as SMS censorship. To help users avoid censorship and gain access to bigger MMS group chats, we’ve added new routes that you can request from our support team.

As part of this, we also rolled out the ability for JMP customers to receive calls directly over SIP.

Holiday Support Schedule

We want to inform you that JMP support will be reduced from our usual response level from December 23 until January 6. During this period, response times will be significantly longer than usual as our support staff take time with their families. We appreciate your understanding and patience.

Looking Ahead

As we move into 2025, we’re excited to keep building on this momentum. Expect even more features, improved services, and expanded opportunities to connect with the JMP community. Your feedback has been, and will always be, instrumental in shaping the future of JMP.

To learn what’s happening with JMP between newsletters, here are some ways you can find out:

by Stephen Paul Weber at December 18, 2024 15:36

December 13, 2024

Kaidan

Kaidan 0.10.1: Media Sharing and New Message Marker Fixes

This release fixes some bugs. Have a look at the changelog for more details.

Changelog

Bugfixes:

  • Fix displaying files of each message in appropriate message bubble (melvo)
  • Fix sending fallback messages for clients not supporting XEP-0447: Stateless file sharing (melvo)
  • Fix margins within message bubbles (melvo)
  • Fix hiding hidden message part (melvo)
  • Fix displaying marker for new messages (melvo)

Download

Or install Kaidan for your distribution:

Packaging status

December 13, 2024 23:00

December 10, 2024

Erlang Solutions

Meet the team: Erik Schön

In our final “Meet the Team” of 2024, we’d like to introduce you to Erik Schön, Managing Director at Erlang Solutions.

Erik shares his journey with Erlang, Elixir, and the BEAM ecosystem, from his work at Ericsson to joining Erlang Solutions in 2019. He also reflects on a key professional highlight in 2024 and looks ahead to his goals for 2025. Erik also reveals his festive traditions, including a Swedish-Japanese twist.

About Erik

So tell us about yourself and your role at Erlang Solutions.

Hello, I’m Erik! I’ve been a big fan of all things Erlang/Elixir/BEAM since the 90s, having seen many successful applications of it when working at Ericsson as an R&D manager for many years.

Since 2019, I’ve been part of the Erlang Solutions Nordic Fjällrävens (“Arctic Foxes”) team based in Stockholm, Sweden. I love helping our customers succeed by delivering faster, safer, and more efficient solutions.

What has been a professional highlight of yours in 2024?

The highlight of 2024 for me was our successful collaboration with BoardClic, a startup that helps its customers with digital board and C-suite level performance evaluations.

We started our collaboration with a comprehensive code and architecture review of their Elixir codebase, drawing on our 25 years of experience delivering software for societal infrastructure, including all the do’s and don’ts for future-proof, secure, resilient, and scalable solutions.

Based on this, we boosted their development of new functionality for a strategically important customer—from idea to live, commercial operation. Two of our curious, competent collaborators, with 10+ years of practical, hands-on Elixir/Erlang/BEAM expertise, worked closely with BoardClic on-site to deliver on time and with quality.

What professional and personal achievements are you looking forward to achieving in 2025? 

Professionally, I look forward to continued success with our customers. This includes strengthening our long-standing partnerships with TV4, Telia, Ericsson, and Cisco. I’m also excited about the start of new partnerships, both inside and outside the BEAM community where we will continue to deliver more team-based, full-stack, end-to-end solutions.

Personally, I look forward to continuing to talk about my trilogy of books – The Art of Change, The Art of Leadership and The Art of Strategy – in podcasts, meetups and conferences.

Do you have any festive traditions that you’re looking forward to this holiday season?

In Sweden, julbord (a buffet-style table of small dishes including different kinds of marinated fish like herring and salmon, meatballs, ham, porridge, etc.) is a very important tradition to look forward to. Since my wife is from Japan, we always try to spice things up a bit by including suitable dishes from the Japanese kitchen, like different kinds of sushi.

Final thoughts

As we wrap up our 2024 meet-the-team series, a big thank you to Erik and all the incredible team members we’ve highlighted this year. Their passion, expertise, and dedication continue to drive our success.

Stay tuned for more insights and profiles in the new year as we introduce even more of the talented people who make Erlang Solutions what it is! If you’d like to speak with our team, please get in touch.

The post Meet the team: Erik Schön appeared first on Erlang Solutions.

by Erlang Solutions Team at December 10, 2024 13:37

December 09, 2024

Kaidan

Kaidan 0.10.0: Too Much to Summarize!

Screenshot of Kaidan in widescreen

We finally made it: Kaidan’s next release with so many features that we cannot summarize them in one sentence!

Most of the work has been funded by NLnet via NGI Assure and NGI Zero Entrust with public money provided by the European Commission. If you want Kaidan’s progress to continue and keep more free software projects alive, please share and sign the open letter for further funding!

Now to the bunch of Kaidan’s new and great features:

Group chats with invitations, user listing, participant mentioning and private/public group chat filtering are supported now. In order to use it, you need an XMPP provider that supports MIX-Core, MIX-PAM and MIX-Admin. Unfortunately, there are not many providers supporting it yet since it is a comparatively recent group chat variant.

You no longer need to quote messages just to reply to them. Messages are referenced internally without bloating the conversation. After clicking on a referenced message, Kaidan even jumps to it. In addition, Kaidan allows you to remove unwanted messages locally.

We added an overview of all shared media to quickly find the image you received some time ago. You can define when to download media automatically. Furthermore, connecting to the server is now really fast - no need to wait multiple seconds just to see your latest offline messages anymore.

If you enter a chat address (e.g., to add a contact), its server part is now autocompleted if available. We added filter options for contacts and group chats. After adding labels to them, you can even search by those labels. And if you do not want to get any messages from someone, you can block them.

In case you need to move to a new account (e.g., if you are dissatisfied with your current XMPP provider), Kaidan helps you with that. For example, it transfers your contacts and informs them about the move. The redesigned onboarding user interface including many fixes assists with choosing a new provider and creating an account on it.

We updated Kaidan to the API v2 of XMPP Providers to stay up-to-date with the project’s data. If you are an operator of a public XMPP provider and would like Kaidan’s users to easily create accounts on it, simply ask to add it to the provider list.

The complete list of changes can be found in the changelog section. There is also a technical overview of all currently supported features.

Please note that we currently focus on new features instead of supporting more systems. Once Kaidan has a reasonable feature set, we will work on that topic again. Even if Kaidan is making good progress, keep in mind that it is not yet a stable app.

Changelog

Features:

  • Add server address completion (fazevedo)
  • Allow to edit account’s profile (jbb)
  • Store and display delivery states of message reactions (melvo)
  • Send pending message reactions after going online (melvo)
  • Enable user to resend a message reaction if it previously failed (melvo)
  • Open contact addition as page (mobile) or dialog (desktop) (melvo)
  • Add option to open chat if contact exists on adding contact (melvo)
  • Use consistent page with search bar for searching its content (melvo)
  • Add local message removal (taibsu)
  • Allow reacting to own messages (melvo)
  • Add login option to chat (melvo)
  • Display day of the week or “yesterday” for last messages (taibsu, melvo)
  • Add media overview (fazevedo, melvo)
  • Add contact list filtering by account and labels (i.e., roster groups) (incl. addition/removal) (melvo, tech-bash)
  • Add message date sections to chat (melvo)
  • Add support for automatic media downloads (fazevedo)
  • Add filtering contacts by availability (melvo)
  • Add item to contact list on first received direct message (melvo)
  • Add support for blocking chat addresses (lnj)
  • Improve notes chat (chat with oneself) usage (melvo)
  • Place avatar above chat address and name in account/contact details on narrow window (melvo)
  • Reload camera device for QR code scanning as soon as it is plugged in / enabled (melvo)
  • Provide slider for QR code scanning to adjust camera zoom (melvo)
  • Add contact to contact list on receiving presence subscription request (melvo)
  • Add encryption key authentication via entering key IDs (melvo)
  • Improve connecting to server and authentication (XEP-0388: Extensible SASL Profile (SASL 2), XEP-0386: Bind 2, XEP-0484: Fast Authentication Streamlining Tokens, XEP-0368: SRV records for XMPP over TLS) (lnj)
  • Support media sharing with more clients even for sharing multiple files at once (XEP-0447: Stateless file sharing v0.3) (lnj)
  • Display and check media upload size limit (fazevedo)
  • Redesign message input field to use rounded corners and resized/symbolic buttons (melvo)
  • Add support for moving account data to another account, informing contacts and restoring settings for moved contacts (XEP-0283: Moved) (fazevedo)
  • Add group chat support with invitations, user listing, participant mentioning and private/public group chat filtering (XEP-0369: Mediated Information eXchange (MIX), XEP-0405: Mediated Information eXchange (MIX): Participant Server Requirements, XEP-0406: Mediated Information eXchange (MIX): MIX Administration, XEP-0407: Mediated Information eXchange (MIX): Miscellaneous Capabilities) (melvo)
  • Add button to cancel message correction (melvo)
  • Display marker for new messages (melvo)
  • Add enhanced account-wide and per contact notification settings depending on group chat mentions and presence (melvo)
  • Focus input fields appropriately (melvo)
  • Add support for replying to messages (XEP-0461: Message Replies) (melvo)
  • Indicate that Kaidan is busy during account deletion and group chat actions (melvo)
  • Hide account deletion button if In-Band Registration is not supported (melvo)
  • Embed login area in page for QR code scanning and page for web registration instead of opening start page (melvo)
  • Redesign onboarding user interface including new page for choosing provider to create account on (melvo)
  • Handle various corner cases that can occur during account creation (melvo)
  • Update to XMPP Providers v2 (melvo)
  • Hide voice message button if uploading is not supported (melvo)
  • Replace custom images for message delivery states with regular theme icons (melvo)
  • Free up message content space by hiding unneeded avatars and increasing maximum message bubble width (melvo)
  • Highlight draft message text to easily see what is not sent yet (melvo)
  • Store sent media in suitable directories with appropriate file extensions (melvo)
  • Allow sending media with less steps from recording to sending (melvo)
  • Add media to be sent in scrollable area above message input field (melvo)
  • Display original images (if available) as previews instead of their thumbnails (melvo)
  • Display high resolution thumbnails for locally stored videos as previews instead of their thumbnails (melvo)
  • Send smaller thumbnails (melvo)
  • Show camera status and reload camera once plugged in for taking pictures or recording videos (melvo)
  • Add zoom slider for taking pictures or recording videos (melvo)
  • Show overlay with description when files are dragged to be dropped on chats for being shared (melvo)
  • Show location previews on a map (melvo)
  • Open locations in user-defined way (system default, in-app, web) (melvo)
  • Delete media that is only captured for sending but not sent (melvo)
  • Add voice message recorder to message input field (melvo)
  • Add inline audio player (melvo)
  • Add context menu entry for opening directory of media files (melvo)
  • Show collapsible buttons to send media/locations inside of message input field (melvo)
  • Move button for adding hidden message part to new collapsible button area (melvo)

Bugfixes:

  • Fix index out of range error in message search (taibsu)
  • Fix updating last message information in contact list (melvo)
  • Fix multiple corrections of the same message (melvo, taibsu)
  • Request delivery receipts for pending messages (melvo)
  • Fix sorting roster items (melvo)
  • Fix displaying spoiler messages (melvo)
  • Fix displaying errors and encryption warnings for messages (melvo)
  • Fix fetching messages from server’s archive (melvo)
  • Fix various encryption problems (melvo)
  • Send delivery receipts for caught-up messages (melvo)
  • Do not hide last message date if contact name is too long (melvo)
  • Fix displaying emojis (melvo)
  • Fix several OMEMO bugs (melvo)
  • Remove all locally stored data related to removed accounts (melvo)
  • Fix displaying media preview file names/sizes (melvo)
  • Fix disconnecting from server when application window is closed including timeout on connection problems (melvo)
  • Fix media/location sharing (melvo)
  • Fix handling emoji message reactions (melvo)
  • Fix moving pinned chats (fazevedo)
  • Fix drag and drop for files and pasting them (melvo)
  • Fix sending/displaying media in selected order (lnj, melvo)

Notes:

  • Kaidan is REUSE-compliant now
  • Kaidan requires Qt 5.15 and QXmpp 1.9 now

Download

Or install Kaidan for your distribution:

Packaging status

December 09, 2024 00:00

December 05, 2024

The XMPP Standards Foundation

The XMPP Newsletter November 2024

XMPP Newsletter Banner


Welcome to the XMPP Newsletter, great to have you here again! This issue covers the month of November 2024.

Like this newsletter, many projects and their efforts in the XMPP community are a result of people’s voluntary work. If you are happy with the services and software you may be using, please consider saying thanks or helping these projects! Interested in supporting the Newsletter team? Read more at the bottom.

XSF Announcements

XMPP Summit 27 & FOSDEM 2025

The XSF is planning the XMPP Summit 27, which is to take place on January 30th & 31st 2025 in Brussels (Belgium, Europe). Following the Summit, the XSF is also planning to be present at FOSDEM 2025, which takes place on February 1st & 2nd 2025. Find all the details in our Wiki. Please sign up now if you are planning to attend, since this helps with organizing. The event is of course open to everyone interested in participating. Spread the word within your circles!

XSF Fiscal Hosting Projects

The XSF offers fiscal hosting for XMPP projects. Please apply via Open Collective. For more information, see the announcement blog post. Current projects you can support:

XMPP Events

  • Berlin XMPP Meetup (DE / EN): monthly meeting of XMPP enthusiasts in Berlin, every 2nd Wednesday of the month at 6pm local time
  • XMPP Italian happy hour [IT]: monthly Italian XMPP web meeting, every third Monday of the month at 7:00 PM local time (online event, with web meeting mode and live streaming).

XMPP Articles

XMPP Software News

XMPP Clients and Applications

  • Conversations has released versions 2.17.3 and 2.17.4 for Android.
  • Monocles Chat 2.0.2 has been released. This version brings MiniDNS, settings export, fixes and more!
  • Monal has released version 6.4.6 for iOS and macOS.
  • Cheogram has released version 2.17.2-2 for Android. This release brings a chat requests feature to hide possible SPAM, with an option to report it all. Additionally, it comes with several improvements and bugfixes. Also worth noting: since last November, Cheogram is once again available for download from the Google Play Store!
Cheogram 2.17.2-2 navigation drawer with account and tag filters and SPAM control, featuring the option report all.

Cheogram 2.17.2-2 navigation drawer with account and tag filters and SPAM control, featuring the option report all.

XMPP Servers

  • Openfire 4.9.1 and 4.9.2 have been released. Version 4.9.1 is a bugfix and maintenance release, whereas version 4.9.2 is a bugfix release. You can read the full changelog for more details.
  • MongooseIM version 6.3.0 has been released. The main highlight is the complete instrumentation rework, allowing integration with Prometheus. Additionally, CockroachDB has been added to the list of supported databases for increased scalability. See the release notes for more information.
  • The (non-official) Prosody app for Yunohost has now reached beta maturity, opening it up for everybody to test. This variant aims to provide better XMPP support for Yunohost users. Compared to the official Metronome and Prosody apps, this app enables A/V calls out of the box. An optional import of rosters, MUCs, and bookmarks from Metronome is also provided. As a reminder, Yunohost is a server distribution based on Debian that makes it easy to self-host a lot of services (apps). Until the last major release (version 12), Metronome was integrated into the core installation, allowing a lot of people to discover XMPP more easily (though with some limitations).

XMPP Libraries & Tools

Extensions and specifications

The XMPP Standards Foundation develops extensions to XMPP in its XEP series in addition to XMPP RFCs.

Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).

Proposed

The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

  • No XEPs Proposed this month.

New

  • Version 0.1.0 of XEP-0496 (Pubsub Node Relationships)
    • Promoted to Experimental (XEP Editor: dg)
  • Version 0.1.0 of XEP-0497 (Pubsub Extended Subscriptions)
    • Promoted to Experimental (XEP Editor: dg)
  • Version 0.1.0 of XEP-0498 (Pubsub File Sharing)
    • Promoted to Experimental (XEP Editor: dg)
  • Version 0.1.0 of XEP-0499 (Pubsub Extended Discovery)
    • Promoted to Experimental (XEP Editor: dg)
  • Version 0.1.0 of XEP-0500 (MUC Slow Mode)
    • Promoted to Experimental (XEP Editor: dg)

Deferred

If an Experimental XEP is not updated for more than twelve months, it will be moved from Experimental to Deferred. Another update will move the XEP back to Experimental.

  • No XEPs deferred this month.

Updated

  • Version 1.0.1 of XEP-0490 (Message Displayed Synchronization)
    • Fix some examples, and their indentation.
    • Add the XML Schema. (egp)

Last Call

Last calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call can help improve the XEP before returning it to the Council for advancement to Stable.

  • No Last Call this month.

Stable

  • Version 1.0.0 of XEP-0490 (Message Displayed Synchronization)
    • Accepted as Stable as per Council vote from 2024-11-05. (XEP Editor: dg)

Deprecated

  • No XEP deprecated this month.

Rejected

  • No XEP rejected this month.

Spread the news

Please share the news on other networks:

Subscribe to the monthly XMPP newsletter

Also check out our RSS Feed!

Looking for job offers or want to hire a professional consultant for your XMPP project? Visit our XMPP job board.

Newsletter Contributors & Translations

This is a community effort, and we would like to thank the translators for their contributions. Volunteers and more languages are welcome! Translations of the XMPP Newsletter will be released here (with some delay):

  • English (original): xmpp.org
    • General contributors: Adrien Bourmault (neox), Alexander “PapaTutuWawa”, Arne, cal0pteryx, emus, Federico, Gonzalo Raúl Nemmi, Jonas Stein, Kris “poVoq”, Licaon_Kter, Ludovic Bocquet, Mario Sabatino, melvo, MSavoritias (fae,ve), nicola, Schimon Zachary, Simone Canaletti, singpolyma, XSF iTeam
  • French: jabberfr.org and linuxfr.org
    • Translators: Adrien Bourmault (neox), alkino, anubis, Arkem, Benoît Sibaud, mathieui, nyco, Pierre Jarillon, Ppjet6, Ysabeau
  • Italian: notes.nicfab.eu
    • Translators: nicola
  • Spanish: xmpp.org
    • Translators: Gonzalo Raúl Nemmi
  • German: xmpp.org
    • Translators: Millesimus

Help us to build the newsletter

This XMPP Newsletter is produced collaboratively by the XMPP community. Each month’s newsletter issue is drafted in this simple pad. At the end of each month, the pad’s content is merged into the XSF GitHub repository. We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. Do you have a project and want to spread the news? Please consider sharing your news or events here, and promote it to a large audience.

Tasks we do on a regular basis:

  • gathering news in the XMPP universe
  • short summaries of news and events
  • summary of the monthly communication on extensions (XEPs)
  • review of the newsletter draft
  • preparation of media images
  • translations
  • communication via media accounts

Unsubscribe from the XMPP Newsletter

To unsubscribe from this list, please log in first. If you have not previously logged in, you may need to set up an account with the appropriate email address.

License

This newsletter is published under CC BY-SA license.

December 05, 2024 00:00

December 04, 2024

Erlang Solutions

Advent of Code 2024

Welcome to Advent of Code 2024!

Like every year, I start the challenge with the best attitude and love of being an Elixir programmer. Although I know that at some point, I will go to the “what is this? I hate it” phase, unlike other years, this time, I am committed to finishing Advent of Code and, more importantly, sharing it with you.

I hope you enjoy this series of December posts, where we will discuss the approach to each exercise. Remember that it is not the only one; the idea of this initiative is to have a great time and share knowledge, so don’t forget to post your solutions and comments and tag us to continue the conversation.

Let’s go for it!

Day 1: Historian Hysteria

Before starting any exercise, I suggest spending some time defining the structure that best fits the problem’s needs. If the structure is adequate, it will be easy to reuse it for the second part without further complications.

In this case, the exercise itself describes lists as the input, so we can skip that step and instead consider which functions of the Enum or List modules can be helpful.

We have this example input:

3   4

4   3

2   5     

1   3   

3   9

3   3

The goal is to transform it into two separate lists and apply sorting, comparison, etc.

List 1: [3, 4, 2, 1, 3, 3]

List 2: [4, 3, 5, 3, 9, 3]

Let’s define a function that reads a file with the input. Each line will initially be represented by a string, so use String.split to separate it at each line break. 

def get_input(path) do
  path
  |> File.read!()
  |> String.split("\n", trim: true)
end


["3   4", "4   3", "2   5", "1   3", "3   9", "3   3"]

We will still have each row represented by a string, but we can now modify this using the functions in the Enum module. Notice that the whitespace between characters is constant, and the pattern is that the first element should go into list one and the second element into list two. Use Enum.reduce to map the elements to the corresponding list and get the following output:


%{
 first_list: [3, 3, 1, 2, 4, 3],
 second_list: [3, 9, 3, 5, 3, 4]
}

I’m using a map so that we can identify the lists and everything is clear. The function that creates them is as follows:

@doc """
This function takes a list where the elements are strings with two
components separated by whitespace.

Example: "3   4"

It assigns the first element to list one and the second to list two,
assuming both are numbers.
"""
def define_separated_lists(input) do
  Enum.reduce(input, %{first_list: [], second_list: []}, fn row, map_with_lists ->
    [elem_first_list, elem_second_list] = String.split(row, "   ")

    %{
      first_list: [String.to_integer(elem_first_list) | map_with_lists.first_list],
      second_list: [String.to_integer(elem_second_list) | map_with_lists.second_list]
    }
  end)
end

Once we have this format, we can move on to the first part of the exercise.

Part 1

Use Enum.sort to sort the lists in ascending order, then pass them to Enum.zip_with, which calculates the distance between the elements of both. Note that we use abs to avoid negative values, and finally Enum.reduce to sum all the distances.

first_sorted_list = Enum.sort(first_list)
second_sorted_list = Enum.sort(second_list)

first_sorted_list
|> Enum.zip_with(second_sorted_list, fn x, y -> abs(x - y) end)
|> Enum.reduce(0, fn distance, acc -> distance + acc end)

Part 2

For the second part, you don’t need to sort the lists; use Enum.frequencies and Enum.reduce to get the multiplication of the elements.

frequencies_second_list = Enum.frequencies(second_list)

Enum.reduce(first_list, 0, fn elem, acc ->
  elem * Map.get(frequencies_second_list, elem, 0) + acc
end)

That’s it. As you can see, once we have a good structure, the corresponding module, in this case Enum, makes the operations more straightforward, so it’s worth spending some time deciding which input format will make our lives easier.

You can see the full version of the exercise here.

Day 2: Red-Nosed Reports

The initial function receives a path corresponding to the text file with the input and reads the strings, separating them by newlines. Inside this function, we will also convert each string to a list of integers, using the Enum functions.

def get_input(path) do
  path
  |> File.read!()
  |> String.split("\n", trim: true)
  |> Enum.map(&convert_string_to_int_list(&1))
end
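The convert_string_to_int_list/1 helper isn’t shown in the post; its name comes from the pipeline above, but the body below is my own minimal sketch of what it might look like:

```elixir
defmodule Day2.Parser do
  # Hypothetical helper: turns a report line such as "7 6 4 2 1"
  # into a list of integers, [7, 6, 4, 2, 1].
  def convert_string_to_int_list(row) do
    row
    |> String.split(" ", trim: true)
    |> Enum.map(&String.to_integer/1)
  end
end
```

For example, `Day2.Parser.convert_string_to_int_list("7 6 4 2 1")` yields `[7, 6, 4, 2, 1]`.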

With this example: 

7 6 4 2 1

1 2 7 8 9

9 7 6 2 1

1 3 2 4 5

8 6 4 4 1

1 3 6 7 9

Our output will look like this:

[
   [7, 6, 4, 2, 1],
   [1, 2, 7, 8, 9],
   [9, 7, 6, 2, 1],
   [1, 3, 2, 4, 5],
   [8, 6, 4, 4, 1],
   [1, 3, 6, 7, 9]
]

We already have a format that allows us to compare integers and validate each report individually. So, let’s do that.

Part 1

For a report to be valid, the following conditions must be met:

  • The levels are either all increasing or all decreasing.
  • Any two adjacent levels differ by at least one and at most three.

We will use Enum.sort and Enum.filter to get those lists that are sorted either ascending or descending.

Enum.filter(levels, &(is_ascending?(&1) || is_descending?(&1)))

A list is sorted in ascending order if it matches Enum.sort(list); it is sorted in descending order if it matches Enum.sort(list, :desc).

defp is_ascending?(list), do: Enum.sort(list) == list
defp is_descending?(list), do: Enum.sort(list, :desc) == list

Once we have the ordered lists, we will now filter those that meet the condition that the distance between their elements is >= 1 and <= 3.

Enum.filter(levels, &valid_levels_distance?(&1, is_valid))

The valid_levels_distance? function is a recursive function that iterates over the elements of the list, returning true if the condition holds and false otherwise. In the end, we have the lists that meet both conditions and only need to count their elements.
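The post doesn’t show the body of valid_levels_distance?; here is one plausible sketch. The second argument mirrors the flag passed in the calls above, although its exact role isn’t shown in the post, so this version simply carries it along:

```elixir
defmodule Day2.Levels do
  # Sketch (my assumption): a single element trivially satisfies the distance rule.
  def valid_levels_distance?([_last], _flag), do: true

  # Check each adjacent pair: the difference must be between 1 and 3.
  def valid_levels_distance?([a, b | rest], flag) do
    diff = abs(a - b)

    if diff >= 1 and diff <= 3 do
      valid_levels_distance?([b | rest], flag)
    else
      false
    end
  end
end
```

With the example reports, `[7, 6, 4, 2, 1]` passes (differences 1, 2, 2, 1) while `[1, 2, 7, 8, 9]` fails at the 2→7 jump.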

path
|> get_input()
|> get_sorted_level_lists()
|> get_valid_adjacent_levels()
|> Enum.count()

Part 2

For this second part, I used a function that wraps the validations. In the previous exercise, each step was separated, but here I will define the function is_a_valid_list?.

defp is_a_valid_list?(list),
   do: (is_ascending?(list) || is_descending?(list)) && valid_levels_distance?(list, false)

If the list is invalid, the following function will remove each level and check if the conditions are met with this operation.

@spec remove_level_and_validate_list(list(), integer) :: list()
def remove_level_and_validate_list(level, index) when index == length(level), do: []

def remove_level_and_validate_list(level, index) do
  new_list = List.delete_at(level, index)

  if is_a_valid_list?(new_list) do
    new_list
  else
    remove_level_and_validate_list(level, index + 1)
  end
end

With this, we will have all the valid lists, whether original or modified, and the last step will be to count their elements.

path
|> get_input()
|> get_valid_lists_with_bad_levels()
|> Enum.count()

I like to use recursive functions for these kinds of exercises because it’s an explicit way to check what happens at each step. But remember that we can also take advantage of Enum and have more compact code. Let me know which approach you prefer.

You can check the full version here.

Day 3: Mull It Over

Let’s start by defining a function to read a text file with the input, a simple File.read!(path). Now, according to the description, the problem screams Regular Expressions, my favorite thing in the world…

Fortunately, the Regex module provides us with everything we need, so we only have to focus on defining the correct patterns.

Spoiler: The second part of the exercise could also be solved with regular expressions, but I’ve taken a different approach; I’ll get to that.

Part 1

Our input is a string, so we can use Regex.scan to get all occurrences of mul(x,y), where x and y are integers, that is, made up of one or more digits. The \d character class, together with the + quantifier, lets us match them.

This expression is enough:

~r/mul\((\d+),(\d+)\)/

The function looks like this:

def get_valid_mul_instructions(section) do
  regex_valid_multi = ~r/mul\((\d+),(\d+)\)/
  Regex.scan(regex_valid_multi, section, capture: :all_but_first)
end

I’m taking advantage of the capture option for a slightly cleaner format: with capture: :all_but_first we directly get a list with the elements we need. For example, for mul(2,4) the result would be ["2", "4"].

[["2", "4"], ["5", "5"], ["11", "8"], ["8", "5"]]

In the end, we will have a list like the one above, which we can process to convert the elements into integers, multiply them, and add everything together. I used Enum.reduce.

Enum.reduce(correct_instructions, 0, fn [x, y] = _mul, acc ->
  String.to_integer(x) * String.to_integer(y) + acc
end)

Part 2

Ah! I almost gave up on this part.

My initial idea was to define a regular expression that would replace the don’t()…do() pattern with any string, like “INVALID,” for example. That way, we would have input without the invalid blocks, and we could reuse all the code from the first section.

After a thousand failed attempts, and remembering why I hate regular expressions, I completely changed the approach to use String.split. When that also failed, I realised that at some point I had changed the original input, so I was never going to get the correct result… anyway. That’s why the final version ended up being much longer than I would have liked, but I invite you to try regular expressions first and take advantage of Regex to solve this second part smoothly.

In this case, the approach was to use String.split to separate the blocks every time I encountered a don’t() or do() and have a list to iterate through.

def remove_invalid_blocks(section) do
  regex = ~r/(don't[(][)]|do[(][)])/
  String.split(section, regex, include_captures: true)
end

Something like this:

[
 "xmul(2,4)&mul[3,7]!^",
 "don't()",
 "_mul(5,5)+mul(32,64](mul(11,8)un",
 "do()",
 "?mul(8,5))",
 "don't()",
 "mul(2,3)"
]

We can add conditions so that everything between a don’t() and do() block is discarded. Once we have an input without these parts, we can apply the same procedure we used for part one.
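The post doesn’t show the discarding step itself; one possible sketch (module and function names are mine) walks the split list, toggling a flag on each don't()/do() marker and keeping only the enabled blocks:

```elixir
defmodule Day3.Filter do
  # Keeps text only while instructions are enabled: a "don't()" marker
  # disables collection, a "do()" marker re-enables it.
  def keep_enabled(blocks) do
    blocks
    |> Enum.reduce({true, []}, fn
      "don't()", {_enabled, acc} -> {false, acc}
      "do()", {_enabled, acc} -> {true, acc}
      block, {true, acc} -> {true, [block | acc]}
      _block, {false, acc} -> {false, acc}
    end)
    |> elem(1)
    |> Enum.reverse()
    |> Enum.join()
  end
end
```

Applied to the split list above, this keeps only the first and the "?mul(8,5))" blocks, producing a string we can feed straight into the part-one pipeline.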

The code ends up looking like this:

path
|> get_input()
|> remove_invalid_blocks()
|> get_valid_mul_instructions()
|> sum_multiplication_results()

You can check the full version here.

Day 4: Ceres Search

Ah, this is one of those problems where the structure we define to work with can make things easier or more complicated. We need a matrix representation.

We could use lists to simulate the arrays of other programming languages; however, let’s consider how we will iterate to obtain the elements around a specific item.

It’s easier to have a shortcut indicating the coordinates, something like: item(3,4). With lists, we have to do a bit more manipulation, so I’ll use a map.

The idea is to transform the entry into a map that allows us constant access:

%{
  {0, 0} => "M", {0, 1} => "M", {0, 2} => "M",
  {1, 0} => "M", {1, 1} => "S", {1, 2} => "A",
  {2, 0} => "A", {2, 1} => "M", {2, 2} => "X"
}


Let’s define a function to read a text file with the input and transform each string into coordinates. For this, I will use Enum.with_index.

path
|> File.read!()
|> String.split("\n", trim: true)
|> Enum.with_index(fn element, index -> get_coordinate_format({element, index}) end)
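The get_coordinate_format/1 helper referenced above isn’t shown in the post; the following is a hypothetical sketch (the function name comes from the pipeline, the bodies are my assumption) of how the coordinate map could be built:

```elixir
defmodule Day4.Grid do
  # Maps a row string and its index to a list of {{row, col}, char} pairs.
  def get_coordinate_format({line, row}) do
    line
    |> String.graphemes()
    |> Enum.with_index(fn char, col -> {{row, col}, char} end)
  end

  # Builds the full coordinate map from the list of input lines.
  def to_map(lines) do
    lines
    |> Enum.with_index(fn line, row -> get_coordinate_format({line, row}) end)
    |> List.flatten()
    |> Map.new()
  end
end
```

For example, `Day4.Grid.to_map(["MM", "SA"])` yields `%{{0, 0} => "M", {0, 1} => "M", {1, 0} => "S", {1, 1} => "A"}`.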

Part 1

Once we have the expected format, we convert the word we will search for into a list as well.

word = String.graphemes("XMAS")

Now, we filter our input for the positions where the “X” appears. This way, we save ourselves from checking element by element and only validate the coordinates that may be the beginning of the word.

character = "X"

Enum.filter(input, fn {_coordinate, value} -> value == character end)

The output will be something like this:

 [
   {{0, 4}, "X"},
   {{9, 5}, "X"},
   {{8, 5}, "X"}
 ]

The format corresponds to {row, column}. Now we will work on this list, considering that for a coordinate {row, column} the adjacent positions are the following:

[
   {row, column + 1},
   {row, column - 1},
   {row + 1, column},
   {row - 1, column},
   {row - 1, column + 1},
   {row - 1, column - 1},
   {row + 1, column + 1},
   {row + 1, column - 1}
 ]

We will iterate in a given direction, comparing against the characters of the word. That is, if position {x, y} = “X” and coordinate {x, y + 1} = “M”, then we move on to {x, y + 1}, and so on, until either the word is completed or one of the characters doesn’t match.
If we complete the word, we add 1 to our counter (Enum.reduce).

Enum.reduce(coordinates, 0, fn {coord, _elem}, occurrences ->
  check_coordinate(coord, word, input) + occurrences
end)
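The body of check_coordinate/3 isn’t shown in the post; here is a hedged sketch of one way to implement it, counting in how many of the eight directions the word can be spelled out from a starting coordinate (the @directions list mirrors the adjacent positions listed earlier; the implementation is my assumption):

```elixir
defmodule Day4.Search do
  # The eight directions to walk from a starting cell, as {row_step, col_step}.
  @directions [{0, 1}, {0, -1}, {1, 0}, {-1, 0}, {-1, 1}, {-1, -1}, {1, 1}, {1, -1}]

  # Counts the directions in which `word` (a list of graphemes) is spelled
  # out starting at {row, col}; missing coordinates return nil and fail the match.
  def check_coordinate({row, col}, word, input) do
    Enum.count(@directions, fn {dr, dc} ->
      word
      |> Enum.with_index()
      |> Enum.all?(fn {char, i} ->
        input[{row + dr * i, col + dc * i}] == char
      end)
    end)
  end
end
```

On a single-row grid containing "XMAS", `check_coordinate({0, 0}, ["X", "M", "A", "S"], input)` returns 1, since only the left-to-right direction matches.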

Part 2

For part two, we use the same approach: look for the coordinates corresponding to the character “A”, so we only work with those that have a chance of being what we need.

character = "A"

Enum.filter(input, fn {_coordinate, value} -> value == character end)

As for the comparison of the elements around it, I used brute force, haha: since there are only four possible valid combinations, I decided to check them directly.

Enum.reduce(coordinates, 0, fn {coordinate, _elem}, acc ->
  valid_x_mas(coordinate, input) + acc
end)

def valid_x_mas({row, column} = _coordinate, input) do
  top_left = input[{row - 1, column - 1}]
  top_right = input[{row - 1, column + 1}]
  bottom_left = input[{row + 1, column - 1}]
  bottom_right = input[{row + 1, column + 1}]

  cond do
    top_left == "M" && bottom_left == "M" && top_right == "S" && bottom_right == "S" -> 1
    top_left == "M" && bottom_left == "S" && top_right == "M" && bottom_right == "S" -> 1
    top_left == "S" && bottom_left == "S" && top_right == "M" && bottom_right == "M" -> 1
    top_left == "S" && bottom_left == "M" && top_right == "S" && bottom_right == "M" -> 1
    true -> 0
  end
end

You can check the full version here.

The post Advent of Code 2024 appeared first on Erlang Solutions.

by Lorena Mireles at December 04, 2024 08:12

November 30, 2024

Madhur Garg

Jaipur

The perfect 3-day Jaipur itinerary - Day 1: Aesthetic fort vibes. Morning: Nahargarh Fort: begin with stunning views of Jaipur city; explore the fort’s intricate architecture and serene ambiance. Gaitor Ki Chhatriyan: visit these beautiful royal cenotaphs for a glimpse into Jaipur’s regal history. Afternoon: stop by Jal Mahal: take 10-20 minutes to admire this palace...

November 30, 2024 00:00

November 29, 2024

Erlang Solutions

Optimising for Concurrency: Comparing and contrasting the BEAM and JVM virtual machines

The success of any programming language in the Erlang ecosystem can be apportioned into three tightly coupled components. They are the semantics of the Erlang programming language, (on top of which other languages are implemented), the OTP libraries and middleware (used to architect scalable and resilient concurrent systems) and the BEAM Virtual Machine tightly coupled to the language semantics and OTP.

Take any of these components on their own, and you have a runner-up. But put the three together, and you have the uncontested winner for scalable, resilient, soft real-time systems. To quote Joe Armstrong, “You can copy the Erlang libraries, but if it does not run on BEAM, you can’t emulate the semantics”. This is reinforced by Robert Virding’s First Rule of Programming, which states that “Any sufficiently complicated concurrent program in another language contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Erlang.”

In this post, we want to explore the BEAM VM internals. We will compare and contrast them with the JVM where applicable, highlighting why you should pay attention to them and care. For too long, this component has been treated as a black box and taken for granted, without understanding the reasons or implications. It is time to change that!

Highlights of the BEAM

Erlang and the BEAM VM were invented to be the right tools to solve a specific problem. Ericsson developed them to help implement telecom infrastructure, handling both mobile and fixed networks. This infrastructure is highly concurrent and scalable in nature. It has to display soft real-time properties and may never fail. We don’t want our phone calls dropped or our online gaming experience affected by system upgrades, high user load or software, hardware and network outages. The BEAM VM solves these challenges using a state-of-the-art concurrent programming model. It features lightweight BEAM processes which don’t share memory and are managed by the BEAM’s schedulers, which can handle millions of them across multiple cores, with garbage collectors running on a per-process basis, highly optimised to reduce any impact on other processes. The BEAM is also the only VM widely used at scale with a built-in distribution model, which allows a program to run on multiple machines transparently.

The BEAM VM supports zero-downtime upgrades with hot code replacement, a way to modify application code at runtime. It is probably the most cited unique feature of the BEAM. Hot code loading means that the application logic can be updated by changing the runnable code in the system whilst retaining the internal process state. This is achieved by replacing the loaded BEAM files and instructing the VM to replace the references of the code in the running processes.

It is a crucial feature for no downtime code upgrades for telecom infrastructure, where redundant hardware was put to use to handle spikes. Nowadays, in the era of containerisation, other techniques are also used for production updates. Those who have never used it dismiss it as a less important feature, but it is nonetheless useful in the development workflow. Developers can iterate faster by replacing part of their code without having to restart the system to test it. Even if the application is not designed to be upgradable in production, this can reduce the time needed for recompilation and redeployments.

Highlights of the JVM

The Java Virtual Machine (JVM) was invented by Sun Microsystems with the intent of providing a platform for ‘write once’ code that runs everywhere. They created an object-oriented language similar to C++, but memory-safe, because its runtime error detection checks array bounds and pointer dereferences. The JVM ecosystem became extremely popular in the Internet era, making it the de facto standard for enterprise server applications. This wide range of applicability was enabled by a virtual machine that caters for many use cases and an impressive set of libraries supporting enterprise development.

The JVM was designed with efficiency in mind. Most of its concepts are abstractions of features found in popular operating systems such as the threading model which maps the VM threads to operating system threads. The JVM is highly customisable, including the garbage collector (GC) and class loaders. Some state-of-the-art GC implementations provide highly tunable features catering for a programming model based on shared memory. And, the JIT (Just-in-time) compiler automatically compiles bytecode to native machine code with the intent to speed up parts of the application.

The JVM allows you to change the code while the program is running. It is a very useful feature for debugging purposes, but production use of this feature is not recommended due to serious limitations.

Concurrency and Parallelism

We talk about parallel code execution if parts of the code are run at the same time on multiple cores, processors or computers, while concurrent programming refers to handling events arriving at the system independently. Concurrent execution can be simulated on single-core hardware, while parallel execution cannot. Although this distinction may seem pedantic, the difference results in some very different problems to solve. Think of many cooks making a plate of carbonara pasta. In the parallel approach, the tasks are split across the number of cooks available, and a single portion would be completed as quickly as it took these cooks to complete their specific tasks. In a concurrent world, you would get a portion for every cook, where each cook does all of the tasks. You use parallelism for speed and concurrency for scale.

Parallel execution tries to decompose the problem into parts that are independent of each other. Boil the water, get the pasta, mix the egg, fry the guanciale, and grate the pecorino cheese. The shared data (or in our example, the serving dish) is handled by locks, mutexes and various other techniques to guarantee correctness. Another way to look at this is that the data (or ingredients) are present, and we want to utilise as many parallel CPU resources as possible to finish the job as quickly as possible.

Concurrent programming, on the other hand, deals with many events that arrive at the system at different times and tries to process all of them within a reasonable timeframe. On multi-core or distributed architectures, some of the processing may run in parallel. Another way to look at it is that the same cook boils the water, gets the pasta, mixes the eggs and so on, following a sequential algorithm which is always the same. What changes across processes (or cooks) is the data (or ingredients) to work on, which exist in multiple instances.

In summary, concurrency and parallelism are two intrinsically different problems, requiring different solutions.

Concurrency the Java way

In Java, concurrent execution is implemented using VM threads. Before the latest developments, only one threading model, called Platform Threads existed. As it is a thin abstraction layer above operating system threads, Platform Threads are scheduled in a rather simple, priority-based way, leaving most of the work to the underlying operating system. With Java 21, a new threading model was introduced, the Virtual Threads. This new model is very similar to BEAM processes since virtual threads are scheduled by the JVM, providing better performance in applications where thread contention is not negligible. Scheduling works by mounting a virtual thread to the carrier (OS) thread and unmounting it when the state of the virtual thread becomes blocked, and replacing it with a new virtual thread from the pool.

Since Java promotes the use of shared data structures, both threading models suffer from performance bottlenecks caused by synchronisation-related issues like frequent CPU cache invalidation and locking errors. Also, programming with concurrency primitives is a difficult task because of the challenges created by the shared memory model. To overcome these difficulties, there have been attempts to simplify and unify the concurrent programming models, with the most successful being the Akka framework. Unfortunately, it is not widely used, limiting its usefulness as a unified concurrency model, even for enterprise-grade applications. While Akka does a great job at replicating the higher-level constructs, it is somewhat limited by the JVM’s lack of primitives that would allow it to be highly optimised for concurrency. While the primitives of the JVM enable a wider range of use cases, they make programming distributed systems harder, as they have no built-in primitives for communication and are often based on a shared memory model. For example, where in a distributed system do you place your shared memory? And what is the cost of accessing it?

Garbage Collection

Garbage collection is a critical task for most of the applications, but applications may have very different performance requirements. Since the JVM is designed to be a ubiquitous platform, it is evident that there is no one-size-fits-all solution. There are garbage collectors designed for resource-limited environments such as embedded devices, and also for resource-intensive, highly concurrent or even real-time applications. The JVM GC interface makes it possible to use 3rd party collectors as well.

Due to the Java Memory Model, concurrent garbage collection is a hard task. The JVM needs to keep track of the memory areas that are shared between multiple threads, the access patterns to the shared memory, thread states, locks and so on. Because of shared memory, collections affect multiple threads simultaneously, making it difficult to predict the performance impact of GC operations. So difficult, that there is an entire industry built to provide tools and expertise for GC optimisation.

The BEAM and Concurrency

Some say that the JVM is built for parallelism, the BEAM for concurrency. While this might be an oversimplification, its concurrency model makes the BEAM more performant in cases where thousands or even millions of concurrent tasks should be processed in a reasonable timeframe.

The BEAM provides lightweight processes to give context to the running code. BEAM processes are different from operating system processes, but they share many concepts. BEAM processes, also called actors, don’t share memory, but communicate through message passing, copying data from one process to another. Message passing is a feature that the virtual machine implements through mailboxes owned by individual processes. It is a non-blocking operation, which means that sending a message to another process is almost instantaneous and the execution of the sender is not blocked during the operation. The messages sent are in the form of immutable data, copied from the stack of the sending process to the mailbox of the receiving one. There are no shared data structures, so this can be achieved without the need for locks and mutexes among the communicating processes, only a lock on the mailbox in case multiple processes send a message to the same recipient in parallel.

Immutable data and message passing together enable the programmer to write processes which work independently of each other and focus on functionality instead of the low-level management of memory and scheduling of tasks. It turns out that this simple design is effective both on a single thread and across multiple threads on a local machine running the same VM, and, using the inter-VM communication facilities of the BEAM, across the network on other machines running the BEAM. Because messages are immutable, the receiving process can be scheduled onto another OS thread (or machine) without locking, providing almost linear scaling on distributed, multi-core architectures. Processes are handled the same way on a local VM as in a cluster of VMs; message sending works transparently regardless of the location of the receiving process.

Processes do not share memory, which allows data to be replicated for resilience and distributed for scale. With two instances of the same process running on one machine or on separate machines, state updates can be shared between them. If one process or machine fails, the other has an up-to-date copy of the data and can continue handling requests without interruption, making the system fault-tolerant. If more than one machine is operational, all processes can handle requests, giving you scalability. The BEAM provides highly optimised primitives for all of this to work seamlessly, while OTP (the “standard library”) provides the higher-level constructs that make programmers' lives easy.
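
The restart half of this model can be sketched with OTP's Supervisor and Agent behaviours; the Counter module below is made up for illustration:

```elixir
defmodule Counter do
  use Agent

  def start_link(_opts), do: Agent.start_link(fn -> 0 end, name: __MODULE__)
  def increment, do: Agent.update(__MODULE__, &(&1 + 1))
  def value, do: Agent.get(__MODULE__, & &1)
end

# The supervisor restarts the Counter process whenever it crashes.
{:ok, _sup} = Supervisor.start_link([Counter], strategy: :one_for_one)

Counter.increment()

# Kill the process; the supervisor immediately starts a fresh instance.
Process.exit(Process.whereis(Counter), :kill)
Process.sleep(100)

# The name is registered again; the new instance starts from its
# initial state.
Counter.value()
```

Replicating state to a second process or machine, as described above, takes extra machinery (state handoff, distributed registries); in this minimal sketch the supervisor simply restarts the process with its initial state.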

Scheduler

We mentioned that one of the strongest features of the BEAM is the ability to run concurrent tasks in lightweight processes. Managing these processes is the task of the scheduler.

By default, the scheduler starts an OS thread for every core and optimises the workload between them. Each process consists of code to be executed and a state which changes over time. The scheduler picks the first process in the run queue that is ready to run and gives it a budget of reductions to execute, where each reduction is the rough equivalent of a BEAM command. Once the process has run out of reductions, is blocked by I/O, is waiting for a message, or has finished executing its code, the scheduler picks the next process from the run queue and dispatches it. This scheduling technique is called pre-emptive.
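
The reduction counter is observable from ordinary code; this small snippet uses Process.info/2 to watch it grow (exact numbers vary by OTP release and workload):

```elixir
{:reductions, r1} = Process.info(self(), :reductions)

# Any work consumes reductions: roughly one per function call.
Enum.each(1..10_000, fn _ -> :ok end)

{:reductions, r2} = Process.info(self(), :reductions)
IO.puts("consumed about #{r2 - r1} reductions")
```

When a process exhausts its per-slice budget (a few thousand reductions in current OTP releases), the scheduler suspends it and dispatches the next runnable process.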

We have mentioned the Akka framework many times. Its biggest drawback is the need to annotate the code with scheduling points, as scheduling is not done at the JVM level. On the BEAM, taking this control out of the programmer's hands preserves and guarantees soft real-time properties, as there is no risk of accidentally causing process starvation.

Processes are spread across the available scheduler threads to maximise CPU utilisation. There are many ways to tweak the scheduler, but tuning is rarely needed outside of edge cases, as the default configuration covers most usage patterns.

There is a sensitive topic that frequently pops up regarding schedulers: how to handle Native Implemented Functions (NIFs). A NIF is a code snippet, typically written in C, compiled as a library and run in the same memory space as the BEAM for speed. The problem with NIFs is that they are not pre-emptively scheduled and can therefore starve the schedulers. Recent BEAM versions added a new feature, dirty schedulers, to give better control over NIFs. Dirty schedulers are separate schedulers running in different threads, minimising the disruption a NIF can cause in a system. The word dirty refers to the nature of the code run by these schedulers.

Garbage Collector

Most modern, high-level programming languages use a garbage collector for memory management, and the BEAM languages are no exception. Trusting the virtual machine to manage resources and memory is very handy when you want to write high-level concurrent code, as it simplifies the task. The underlying implementation of the garbage collector is fairly straightforward and efficient, thanks to a memory model based on immutable state: data is copied, not mutated, and because processes do not share memory, there are no inter-process dependencies to manage.

Another feature of the BEAM is that garbage collection is run only when needed, on a per-process basis, without affecting other processes waiting in the run queue. As a result, garbage collection in Erlang does not ‘stop the world’. It prevents processing latency spikes because the VM is never stopped as a whole – only specific processes are, and never all of them at the same time. In practice, garbage collection is just part of what a process does and is treated as another reduction. Collecting a process suspends it for a very short interval, often microseconds. As a result, there are many small bursts, triggered only when a process needs more memory. A single process usually doesn't allocate large amounts of memory and is often short-lived, further reducing the impact: all of its allocated memory is freed immediately on termination.
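
You can watch this per-process behaviour directly: :erlang.garbage_collect/1 collects a single process while every other process keeps running. The numbers printed below are illustrative only:

```elixir
pid =
  spawn(fn ->
    # Allocate a large temporary list; its result is discarded, so it
    # becomes garbage on this process's private heap.
    Enum.map(1..50_000, &Integer.to_string/1)

    receive do
      :stop -> :ok
    end
  end)

Process.sleep(50)
{:memory, before_bytes} = Process.info(pid, :memory)

# Collect only this one process; no other process is paused.
:erlang.garbage_collect(pid)

{:memory, after_bytes} = Process.info(pid, :memory)
IO.puts("heap went from #{before_bytes} to #{after_bytes} bytes")
send(pid, :stop)
```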

More about features

The features of the garbage collector are discussed in an excellent blog post by Lukas Larsson. There are many intricate details, but it is optimised to handle immutable data in an efficient way, dividing the data between the stack and the heap for each process. The best approach is to do the majority of the work in short-lived processes.

A question that often comes up on this topic is how much memory the BEAM uses. Under the hood, the VM allocates big chunks of memory and uses custom allocators to store the data efficiently and minimise the overhead of system calls. 

This has two visible effects: used memory decreases only gradually after the space is no longer needed, and reallocating huge amounts of data might mean doubling the current working memory. The first effect can, if necessary, be mitigated by tweaking the allocator strategies. The second is easy to monitor and plan for if you have visibility into the different types of memory usage. (WombatOAM is one monitoring tool that provides these system metrics out of the box.)
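
A quick way to get that visibility from inside the VM itself is :erlang.memory/0, which reports the allocators' own accounting by category:

```elixir
# Snapshot of the VM's own accounting of allocated memory, in bytes.
mem = :erlang.memory()

# A few of the most useful categories; the full list also covers
# :system, :code and others.
IO.inspect(Keyword.take(mem, [:total, :processes, :binary, :ets, :atom]),
  label: "BEAM memory usage")
```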

JVM vs BEAM concurrency

As mentioned before, the JVM and the BEAM handle concurrent tasks very differently. Under high load, shared resources become bottlenecks, and in a Java application we usually can't avoid that; this is why the BEAM is superior for these kinds of applications. While copying memory has a certain cost, the performance impact of synchronised access to shared resources is much higher. We performed many tests to measure this impact.

JVM and the BEAM

This chart clearly displays the large performance differences between the JVM and the BEAM. In this test, the applications were implemented in Elixir and Java; the Elixir code compiles to BEAM bytecode, while the Java code compiles to JVM bytecode.

When not to use the BEAM

It is very much about the right tool for the job. Do you need a system to be extremely fast, but with little concurrency? A handful of events handled in parallel, each of them quickly? Do you need to crunch numbers for graphics, AI or analytics? Then go down the C++, Python or Java route. Telecom infrastructure does not need fast operations on floats, so raw speed was never a priority for the BEAM. Add dynamic typing, which has to perform all type checks at runtime, and compile-time optimisations become less effective. Number crunching is therefore best left to the JVM, Go or other languages which compile to native code. It is no surprise that floating-point operations on Erjang, the version of Erlang running on the JVM, were 5000% faster than on the BEAM. But where we have seen the BEAM shine is in using its concurrency to orchestrate number crunching, outsourcing the analytics to C, Julia, Python or Rust: you do the map outside the BEAM and the reduce within the BEAM.
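
That map-outside, reduce-inside pattern can be sketched with Task.async_stream. Here a pure Elixir function stands in for the external kernel that would, in practice, be a NIF or a port into C, Julia, Python or Rust:

```elixir
# Stand-in for the heavy numeric kernel (sum of squares of a chunk).
# In a real system this function would hand the chunk to native code.
crunch = fn chunk -> Enum.reduce(chunk, 0, fn n, acc -> n * n + acc end) end

partials =
  1..1_000
  |> Enum.chunk_every(250)
  # One lightweight BEAM process per chunk; the scheduler spreads them
  # across the available cores.
  |> Task.async_stream(crunch, max_concurrency: System.schedulers_online())
  |> Enum.map(fn {:ok, sum} -> sum end)

# The reduce step stays on the BEAM.
total = Enum.sum(partials)
IO.puts("sum of squares up to 1000: #{total}")
```

The BEAM contributes the part it is good at: supervising thousands of concurrent workers and aggregating their results, while the numeric heavy lifting happens elsewhere.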

The mantra has always been fast enough. It takes a few hundred milliseconds for humans to perceive a stimulus (an event) and process it in their brains, so micro- or nanosecond response times are not necessary for many applications. Nor would you use the BEAM for microcontrollers, as it is too resource-hungry. But for embedded systems with a bit more processing power, where multi-core is becoming the norm and you need concurrency, the BEAM shines. Back in the 90s, we were implementing telephony switches handling tens of thousands of subscribers on embedded boards with 16 MB of memory. How much memory does a Raspberry Pi have these days? And finally, hard real-time systems: you would probably not want the BEAM to manage your airbag control system. There you need hard guarantees, something only a hard real-time OS and a language with no garbage collection or exceptions can provide. An implementation of an Erlang VM running on bare metal, such as GRiSP, will give you similar guarantees.

Conclusion

Use the right tool for the job. If you are writing a soft real-time system that has to scale out of the box and should never fail, and you want all that without the hassle of reinventing the wheel, the BEAM is the battle-proven technology you are looking for.

For many, the BEAM works as a black box. Not knowing how it works is like driving a Ferrari without being able to achieve optimal performance, or without understanding which part of the engine that strange sound is coming from. This is why you should learn more about the BEAM, understand its internals, and be ready to fine-tune and fix it.

For those who have used Erlang and Elixir in anger, we have launched a one-day instructor-led course that will demystify and explain much of what you have seen, while preparing you to handle massive concurrency at scale. The course is available through our new instructor-led remote training; learn more here. We also recommend The BEAM Book by Erik Stenman and the BEAM Wisdoms, a collection of articles by Dmytro Lytovchenko.

If you’d like to speak to a member of the team, feel free to drop us a message.

The post Optimising for Concurrency: Comparing and contrasting the BEAM and JVM virtual machines appeared first on Erlang Solutions.

by Attila Sragli at November 29, 2024 09:54

November 21, 2024

Ignite Realtime Blog

Florian, Dan and Dave Elected in the XSF!

In an annual vote, not one, not two, but three Ignite Realtime community members have been selected into leadership positions of the XMPP Standards Foundation! :partying_face:

The XMPP Standards Foundation is an independent, nonprofit standards development organisation whose primary mission is to define open protocols for presence, instant messaging, and real-time communication and collaboration on top of the IETF’s Extensible Messaging and Presence Protocol (XMPP). Most of the projects that we’re maintaining in the Ignite Realtime community have a strong dependency on XMPP.

The XSF Board of Directors, to which both @Flow and @dwd have been elected, oversees the business affairs of the organisation. They are now in a position to make key decisions on the direction of XMPP technology and standards development, manage resources and partnerships to further the growth of the XMPP ecosystem, and promote XMPP in the larger open-source and communications community, advocating for its adoption and use in various applications.

The XMPP Council, to which @danc has been re-elected, is the technical steering group that approves XMPP Extension Protocols. The Council is responsible for standards development and process management. With that, Dan is now at the forefront of new developments within the XMPP community!

Congrats to you all, Dan, Dave and Florian!

For other release announcements and news follow us on Mastodon or X


by guus at November 21, 2024 22:19

The XMPP Standards Foundation

2024 Annual Meeting and Voting Results

Every year the members of the XSF get together to vote on the current quarter’s new and renewing members. Additionally, elections for both XMPP Council and Board of Directors have been held.

This year’s election meeting was held on November 21st, 2024 and voting results can be found in the XSF Wiki.

The 2024/2025 term will be formed by the following members:

  • XMPP Council
    • Dan Caseley
    • Daniel Gultsch
    • Jérôme Poisson
    • Stephen Paul Weber
    • Marvin Wißfeld
  • Board of Directors
    • Edward Maurer
    • Ralph Meijer
    • Florian Schmaus
    • Dave Cridland
    • Arne-Bruen Vogelsang

Please congratulate them if you run across any of those listed here, but also please help us make this another great year for the XSF.

November 21, 2024 00:00