Planet Jabber

April 08, 2026

JMP

Full Text Search with IndexedDB

While working on Borogove, there has been a desire for full-text search of locally-stored chat message history. For web-based apps the main storage engine in use is IndexedDB, which is a rather low-level system and certainly doesn’t have easy facilities for this kind of operation. So what is the simplest, performant way to implement full-text search on IndexedDB?

In case you are wondering “what is full-text search?”: for our purposes, we are going to be looking at a search that finds any message containing all the words the user has typed, in any order.

Table Scan

While this won’t be our final solution, it is almost always the right place to start. If your data is small (say, under 10k documents), a table scan is pretty quick, and definitely the simplest option.

// Helper so we can use promises with IndexedDB.
// Works for both IDBRequest (success/error) and IDBTransaction
// (complete/abort), since we assign all four handlers.
function promisifyRequest(request) {
	return new Promise((resolve, reject) => {
		request.oncomplete = request.onsuccess = () => resolve(request.result);
		request.onabort = request.onerror = () => reject(request.error);
	});
}

const stopwords = ["and", "if", "but"];
function tokenize(s) {
	return s.toLowerCase().split(/\s*\b/)
		.filter(w => w.length > 1 && w.match(/\w/) && !stopwords.includes(w));
}

function stemmer(s) {
	// Optional: swap in a real stemmer such as
	// https://www.npmjs.com/package/porter2 or similar.
	return s; // identity stemmer, so the example runs as-is
}

async function search(q) {
	const qTerms = new Set(tokenize(q).map(stemmer));
	const tx = db.transaction(["messages"], "readonly");
	const store = tx.objectStore("messages");
	const cursor = store.openCursor();

	const result = [];
	while (true) {
		const cresult = await promisifyRequest(cursor);
		if (!cresult?.value) break;

		// Set.prototype.isSupersetOf requires a recent (2024+) engine
		if (new Set(tokenize(cresult.value.text).map(stemmer)).isSupersetOf(qTerms)) {
			result.push(cresult.value);
		}
		cresult.continue();
	}

	return result;
}

Even though we aren’t doing anything fancy with the database yet, there are still a lot of important building blocks here.

First we tokenize the query. This means chunking it up into “words”. These don’t have to be exactly real words; they can be consistent parts of words or anything else, so long as you tokenize your query and document text in the same way. Here we use a simple strategy: trust the \b word-break regex pattern, strip any extra whitespace, ignore any empty words or words made only of non-word characters (again, as determined by \w, nothing fancy), and also ignore any stopwords. A “stopword” is a very common word, such as “and”, that is not useful to include in the search query. Mostly, stopwords are useful to avoid blowing up the index size later, but we include them here for consistency.
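Concretely, running the tokenizer over a sample sentence (the definitions are repeated here so the snippet runs standalone):

```javascript
// Same tokenizer as in the search example above.
const stopwords = ["and", "if", "but"];
function tokenize(s) {
	return s.toLowerCase().split(/\s*\b/)
		.filter(w => w.length > 1 && w.match(/\w/) && !stopwords.includes(w));
}

// "and" is dropped as a stopword; punctuation-only and single-character
// chunks are dropped by the filter.
tokenize("Flying to the moon and back");
// → ["flying", "to", "the", "moon", "back"]
```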

Next we stem the tokens. This is optional and depends on the kind of search you’re trying to build. The purpose of a stemmer is to make searches for, e.g., “flying” also match messages containing the word “fly”.
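The article leaves the stemmer as a stub. Purely as a toy illustration of the idea (a real project should use a tested implementation such as the porter2 package mentioned above), a crude suffix-stripper looks like this:

```javascript
// Toy suffix-stripping stemmer, for illustration only. Real stemmers
// (e.g. Porter2) handle far more morphology and avoid over-stripping.
function stemmer(s) {
	return s.replace(/(ing|ed|s)$/, "");
}

stemmer("flying");  // → "fly"
stemmer("jumped");  // → "jump"
stemmer("cats");    // → "cat"
```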

Then we iterate over all the items in the object store. If you wanted the results in a particular order, you could instead iterate over an index on the store. We tokenize and stem the text from each item and check if it contains all the “words” from qTerms; if so, it is part of the results.

Use an Index

Now, what if you have many messages (one million or more, perhaps) and the simple table scan is just too slow? How can we speed this up? With IndexedDB we only get one kind of index: an ordered index, usually built on a B-Tree. This is not what most full-text indexes are built on, but we can still get a very big performance boost without much more complexity.

First in the onupgradeneeded migrator we need to create the index:

tx.objectStore("messages").createIndex("terms", "terms", { multiEntry: true });

This is a “multiEntry” index, which means the index gets one entry for each item in the array stored at this key, rather than indexing the array as a whole. So when we store a new message we need to include the terms as an array:

tx.objectStore("messages").put({ text, terms: [...new Set(tokenize(text).map(stemmer))] });
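To see what multiEntry buys us, here is the resulting index shape modeled in plain JavaScript. A Map stands in for the B-Tree; this is a conceptual sketch, not the IndexedDB API:

```javascript
// Conceptual model of a multiEntry index: each element of the indexed
// array gets its own entry pointing back at the record's primary key.
function buildMultiEntryIndex(records) {
	const index = new Map();
	records.forEach((record, primaryKey) => {
		for (const term of record.terms) {
			if (!index.has(term)) index.set(term, []);
			index.get(term).push(primaryKey);
		}
	});
	return index;
}

const idx = buildMultiEntryIndex([
	{ terms: ["fly", "moon"] },  // primary key 0
	{ terms: ["fly", "back"] },  // primary key 1
]);
idx.get("fly");  // → [0, 1]
```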

Now, this does not let us search by our full query, but rather only by one word. How does this help us? Well, we can iterate over only documents which match at least one of the terms in our query. Which one should we pick? Counting items is pretty fast, so let’s pick whichever one has the smallest number of results:

async function search(q) {
	const qTerms = new Set(tokenize(q).map(stemmer));
	const tx = db.transaction(["messages"], "readonly");
	const store = tx.objectStore("messages");
	const index = store.index("terms");
	// Figure out which search term matches the fewest messages
	let probeTerm = null;
	let probeScore = null;
	for (const term of qTerms) {
		const score = await promisifyRequest(index.count(IDBKeyRange.only(term)));
		if (!probeTerm || score < probeScore) {
			probeTerm = term;
			probeScore = score;
		}
	}
	// Using the smallest list of messages that match one term
	// Find the ones that match every term
	const result = [];
	const cursor = index.openCursor(IDBKeyRange.only(probeTerm));
	while (true) {
		const cresult = await promisifyRequest(cursor);
		if (!cresult?.value) break;
		if (new Set(cresult.value.terms).isSupersetOf(qTerms)) {
			result.push(cresult.value);
		}
		cresult.continue();
	}

	// Sort results
	return result.sort((a, b) => a.timestamp < b.timestamp ? -1 : (a.timestamp > b.timestamp ? 1 : 0));
}

The operation to count the index for each term is pretty fast, but if you find this prohibitive, you could also store these counts under their own keys and update them as you insert new messages. Once we know which term matches the fewest messages, we do the same scan as before, but only over that much smaller subset. Since the tokenized and stemmed terms are stored on each record, we can compare against them directly rather than recomputing.
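If the per-query count() calls ever did become a bottleneck, the counts could be maintained incrementally at insert time. A sketch of that bookkeeping, with a plain Map standing in for a hypothetical dedicated counts object store (in IndexedDB the updates would happen in the same transaction as the message put()):

```javascript
// Sketch: incrementally maintained term counts, so choosing the probe
// term becomes a pure lookup instead of an index.count() per term.
// The Map stands in for a hypothetical "termCounts" object store.
const termCounts = new Map();

function recordMessageTerms(terms) {
	for (const term of terms) {
		termCounts.set(term, (termCounts.get(term) || 0) + 1);
	}
}

function pickProbeTerm(qTerms) {
	let probeTerm = null;
	let probeScore = Infinity;
	for (const term of qTerms) {
		const score = termCounts.get(term) || 0;
		if (score < probeScore) {
			probeTerm = term;
			probeScore = score;
		}
	}
	return probeTerm;
}

recordMessageTerms(["fly", "moon"]);  // message 1
recordMessageTerms(["fly", "back"]);  // message 2
pickProbeTerm(new Set(["fly", "moon"]));  // → "moon" (1 match vs 2)
```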

Now if we want it sorted we have to do it ourselves, just like a DB engine would with this kind of query where the order we want does not match the index order.

On a test set of one million messages, this simple index was enough to take the performance from unusable grinding to almost instant responses, since the number of messages in the smallest term is usually still well under 10k.

by Stephen Paul Weber at April 08, 2026 13:32

April 07, 2026

Mathieu Pasquet

Poezio 0.16.1

Here is 0.16.1, a bugfix and cleanup release for poezio, with exactly one new feature: an improvement to an existing plugin.

Poezio is a terminal-based XMPP client which aims to replicate the feeling of terminal-based IRC clients such as irssi or weechat; to this end, poezio originally only supported multi-user chats and anonymous authentication.

Features

  • Handle redacted/moderated messages in /display_corrections as well

Fixes

  • A bug in occupant comparison introduced in 0.16, leading to inconsistent MUC state in some cases.
  • Several fixes to MAM syncing and history gaps in MUC, which should be more reliable (but not perfect yet).
  • Issues around MUC self-ping (XEP-0410) where you could not ever leave a room if it was enabled.
  • A bug in the new tcp-reconnect plugin which would traceback when disconnected.
  • Loading issues with plugins defined in entrypoints, of which the most common is poezio-omemo.

Internal

  • Introduction of a harsher linting pipeline.
  • Plenty of typing and linting fixes.
  • Added poezio-omemo to the plugins list in pyproject, for easier installation.

by mathieui at April 07, 2026 19:45

Erlang Solutions

What Breaks First in Real-Time Messaging?

Real-time messaging sits at the centre of many modern digital products. From live chat and streaming reactions to betting platforms and collaborative tools, users expect communication to feel immediate and consistent.

When pressure builds, the system doesn’t typically collapse. It starts to drift. Messages arrive slightly late, ordering becomes inconsistent, and the platform feels less reliable. That drift is often the first signal that chat scalability is under strain.

Imagine a live sports final going into overtime. Millions of viewers react at once. Messages stack up, connections remain open, and activity intensifies. For a moment, everything appears stable. Then delivery slows. Reactions fall slightly out of sync. Some users refresh, unsure whether the delay is on their side or yours.

Those moments reveal whether the system was designed with fault tolerance in mind. If it was, the platform degrades predictably and recovers. If it wasn’t, small issues escalate quickly.

This article explores what breaks first in real-time messaging and how early architectural decisions determine whether users experience resilience or visible strain.

Real-Time Messaging in Live Products

Live products place real-time messaging under immediate scrutiny. On sports platforms, streaming services, online games, and live commerce sites, messaging is visible while it happens. When traffic spikes, performance issues are exposed immediately.

User expectations are already set. WhatsApp reports more than 2 billion monthly active users, shaping what instant communication feels like in practice. That expectation carries into every live experience, whether it is chat, reactions, or collaborative interaction.

Source: Statista

Live environments concentrate demand rather than distribute it evenly. Traffic clusters around specific moments. Concurrency can double within minutes, and those users remain connected while message volume increases sharply. That concentration exposes limits in chat scalability far more quickly than steady growth ever would.

The operational impact tends to follow a familiar pattern:

Live scenario       | System pressure                   | Business impact
Sports final        | Sudden surge in concurrent users  | Latency becomes public
Product launch      | Burst of new sessions             | Onboarding friction
Viral stream moment | Rapid fan-out across channels     | Inconsistent experience
Regional spike      | Localised traffic surge           | Infrastructure imbalance

For live platforms, volatility comes with the territory.

When delivery slows or behaviour becomes inconsistent, engagement drops first. Retention follows. Users rarely blame infrastructure. They blame the platform.

Designing for unpredictable load requires architecture that assumes spikes are normal and isolates failure when it occurs. If you operate a live platform, that discipline determines whether users experience seamless interaction or visible strain.

High Concurrency and Chat Scalability

In live environments, real-time messaging operates under sustained concurrency rather than occasional bursts. Users remain connected, they interact continuously, and activity compounds when shared moments occur.

High concurrency is not simply about having many users online. It involves managing thousands, sometimes millions, of persistent connections sending and receiving messages at the same time. Every open connection consumes resources, and messages may need to be delivered to large groups of active participants without delay.

This is where chat scalability really gets tested.

In steady conditions, most systems appear stable. When demand synchronises, message fan-out increases rapidly, routing paths multiply, and coordination overhead grows. Small inefficiencies that were invisible during testing begin to surface. Response times drift. Ordering becomes inconsistent. Queues expand before alerts signal a problem.

High concurrency does not introduce entirely new issues. It reveals architectural assumptions that only become visible at scale. Concurrency increases are predictable in live systems. The risk lies in whether the messaging layer can sustain that pressure without affecting user experience.

Messaging Architecture Limits

The pressure created by high concurrency does not stay abstract for long. It shows up in the messaging architecture.

When performance degrades under load, the root cause usually sits there. At scale, every message must be routed, processed, and delivered to the correct subscribers. In distributed systems, that requires coordination across servers, and coordination carries cost. Under sustained traffic, small inefficiencies compound quickly.

Routing layers can become bottlenecks when messages must propagate across multiple nodes. Queues expand when incoming traffic outpaces processing capacity. Latency increases as backlogs grow. If state drifts between nodes, messages may arrive late or appear out of sequence.

This is where the earlier discussion of chat scalability becomes tangible. It is not only about supporting more users. It is about how efficiently the architecture distributes load and maintains consistency when concurrency remains elevated.

These limits rarely appear during controlled testing with predictable traffic. They emerge under real usage, where concurrency is sustained and message patterns are uneven.

Well-designed systems account for this from the outset. They reduce coordination overhead, isolate failure domains, and scale horizontally without introducing fragility. When they do not, performance drift becomes visible long before a full outage occurs, and users feel the impact immediately.

Fault Tolerance and Scaling

If you operate a live platform, this is where design choices become visible.

Once architectural limits are exposed, the question is how your system behaves as demand continues to rise.

Scaling real-time messaging is about making sure that when components falter, the impact is contained. Distributed systems are built on a simple assumption: things break. You will see restarts, reroutes and unstable network conditions. But the real test is whether your architecture absorbs the shock or amplifies it.

Systems built with fault isolation in mind tend to recover locally. Load shifts across nodes. Individual components stabilise without affecting the wider service. Systems built around central coordination points are more vulnerable to ripple effects.

In practical terms, the difference shows up as:

  • Localised disruption rather than cascading instability
  • Brief slowdown instead of prolonged degradation
  • Controlled recovery rather than platform-wide interruption

These behaviours define whether users experience resilience or instability.

Fault tolerance determines how the system behaves when conditions are at their most demanding.

Real-Time Messaging in Entertainment

Entertainment platforms expose weaknesses in real-time messaging quickly because traffic converges rather than building steadily over time.

When a live event captures attention, users respond together. Demand rises sharply within a short window, and those users remain connected while interaction increases. The stress on the system comes not from gradual growth, but from concentrated activity.

Take the widespread Cloudflare outage in November 2025. As a core infrastructure provider handling a significant share of global internet traffic, its disruption affected major platforms simultaneously. The issue was due to underlying infrastructure, but the impact was immediate and highly visible because so many users were active at once.

Live gaming environments operate under comparable traffic patterns by design. During competitive matches on FACEIT, large numbers of players remain connected while scores, rankings, and in-game events update continuously. Activity intensifies around key moments, increasing message throughput while persistent connections stay open.

Across these environments, the pattern is consistent. Users connect simultaneously, interact continuously, and expect immediate feedback. When performance begins to drift, the impact is shared rather than isolated.

A Note on Architecture

This is where architectural choices begin to matter.

Platforms that manage sustained concurrency and recover predictably under pressure tend to share certain structural characteristics. In messaging environments, MongooseIM is one example of infrastructure designed around those principles.

In practical terms, that means:

  • Supporting large numbers of persistent connections without central bottlenecks
  • Distributing load across nodes to reduce coordination overhead
  • Containing failure within defined boundaries rather than allowing it to cascade
  • Maintaining message consistency even when traffic intensifies

These design choices do not eliminate volatility. They determine how the system behaves when volatility occurs.

In live entertainment platforms, that distinction shapes whether pressure remains internal or becomes visible to users.

Conclusion

Real-time messaging raises expectations that are easy to meet under steady conditions and far harder to sustain when attention converges.

What breaks first is rarely availability. It is timing. It is the subtle drift in delivery and consistency that users notice before any dashboard signals a failure.

Live environments make that visible because traffic arrives together and interaction compounds quickly. Concurrency is not the exception. It is the operating model. Whether the experience holds depends on how the architecture distributes load and contains failure.

Designing for that reality early makes scaling more predictable and reduces the risk of visible strain later. If you are building or modernising a platform where real-time interaction matters, assess whether your messaging architecture is prepared for sustained concurrency. Get in touch to continue the conversation.

The post What Breaks First in Real-Time Messaging? appeared first on Erlang Solutions.

by Erlang Solutions Team at April 07, 2026 12:48

ProcessOne

Fluux Messenger 0.15: Search, Themes, Polls, and a Whole Lot More

We have shipped Fluux Messenger 0.15, and it is packed with major advancements. After the excitement of the FOSDEM launch, the team put their heads down on three things users were asking for most: the ability to find past messages, a messenger that looks and feels different from Discord, and richer collaboration tools for group rooms.

Find anything, instantly

Full-text search is now built into Fluux Messenger: every conversation and every room, searchable locally and instantly.

The search engine runs entirely on your device using an IndexedDB inverted index with prefix matching and highlighted result snippets. No messages leave your machine to power this. For rooms with deep history on the server, results are supplemented with MAM (Message Archive Management) server-side queries, giving you the best of both worlds: speed from local cache, completeness from the archive.
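The post does not show Fluux’s internals, but prefix matching over an ordered term index is commonly done with a key range: every key k with prefix <= k < prefix + "\uffff" matches. Here the technique is modeled against a sorted array standing in for the index; in IndexedDB the same bounds would be expressed as IDBKeyRange.bound(prefix, prefix + "\uffff", false, true):

```javascript
// General prefix-matching trick over an ordered index, modeled with a
// sorted array. This is a sketch of the technique, not Fluux's code.
function prefixMatches(sortedTerms, prefix) {
	const upper = prefix + "\uffff"; // just past every key with this prefix
	return sortedTerms.filter(t => t >= prefix && t < upper);
}

prefixMatches(["flew", "fluent", "flux", "moon"], "flu");
// → ["fluent", "flux"]
```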

A find-in-page mode (Cmd+F) lets you scan within any conversation, mirroring the browser experience developers expect.

Make it yours: a full theme system

Fluux Messenger now ships with a proper design token system, three-tier (Foundation → Semantic → Component), and a theme picker bundled in the Appearance settings.

Twelve themes are available out of the box: Fluux, Dracula, Nord, Gruvbox, Catppuccin Mocha, Solarized, One Dark, Tokyo Night, Monokai, Rosé Pine, Kanagawa, and GitHub. If none of them are quite right, you can import your own theme or inject CSS snippets directly. A global accent color picker with theme-specific presets lets you go further without writing a single line of code.

Font size adjustment is now in Appearance settings as well.

Polls in group rooms

Reaction-based polls are now a first-class feature in MUC rooms. Create a poll with custom emojis, set a deadline, and let participants vote. The room enforces voting rules server-side, so results are trustworthy. Polls can be closed and reopened, and an "unanswered" banner nudges participants who haven’t voted yet. Results are visualized inline and captured in the activity log.

Faster connections with FAST authentication

Fluux Messenger now supports XEP-0484: FAST (Fast Authentication Streamlining Tokens) alongside SASL2. Reconnection after a network interruption or wake from sleep is now possible in the web version.

Media cache

Downloaded images are now cached to the filesystem, so they load instantly on revisit and don’t count against your bandwidth twice. A new storage management screen in settings lets you see and clear the cache.

Polish and fixes worth noting

  • Emoji picker (emoji-mart) with dynamic viewport positioning, so it never clips off-screen
  • Last Activity (XEP-0012) — offline contacts now show how long ago they were last seen
  • Syntax highlighting for code blocks, lazily loaded per language, with theme integration and a fullscreen modal for long snippets
  • IRC-style mention detection with consistent per-user colors (XEP-0392)
  • VCard popovers on occupant and member list nicks
  • Scroll-to-bottom button now shows an unread badge and implements two-step scroll: first click jumps to the new message marker, second click goes to the very bottom
  • Particle burst animation when adding a reaction, and a message send slide-up animation
  • Upgraded to React 19 with the React Compiler for automatic memoization, and Vite 8 with lazy-loaded views for a faster startup

A note on Windows signing

Starting with 0.15, the Windows binary is no longer signed. This is a temporary decision while we work through the signing infrastructure; full details are in issue #290. On install, Windows will prompt you to manually trust the application. We know this isn’t ideal and are working to restore signed builds.


Get it

Fluux Messenger 0.15 is available now. Download it here or check the full changelog on GitHub for every detail.

As always, feedback and bug reports are welcome. The community is the best part of building open-standard messaging software.

by Mickaël Rémond at April 07, 2026 08:44

April 06, 2026

Ignite Realtime Blog

Experimenting with MariaDB, Firebird and CockroachDB Support in Openfire

I have recently started experimenting with adding support for three additional databases in Openfire: MariaDB, Firebird and CockroachDB.

This work is still exploratory. Before committing to this direction, I would like to get a better understanding of whether this is actually valuable to the Openfire community.

I have prepared initial pull requests for each database:

These are not production-ready, but are intended to validate feasibility and surface any obvious issues.

Why these databases?

MariaDB is widely used as a drop-in replacement for MySQL. Although Openfire supports MySQL, MariaDB is not explicitly treated as a first-class option. Given how often it is used in practice, formal support could provide more confidence for administrators.

Firebird represents a more niche but still relevant ecosystem. It is commonly found in long-lived, on-premise systems where changing the database is not realistic. Supporting it could make Openfire easier to adopt in those environments.

CockroachDB targets modern, distributed deployments. With its PostgreSQL compatibility and focus on resilience and scalability, it could make Openfire more attractive for cloud-native and multi-region setups.

Trade-offs

Supporting additional databases comes with a cost: more code paths, more testing, and more long-term maintenance. The key question is whether the added flexibility justifies that complexity.

Feedback wanted

Before taking this further, I would really appreciate feedback from the community:

Are you using (or considering) MariaDB, Firebird or CockroachDB with Openfire? Would official support influence your deployment decisions? Do you see this as valuable, or as unnecessary complexity?

Please share your thoughts on the pull requests or through the usual community channels!

For other release announcements and news follow us on Mastodon or X

3 posts - 3 participants

Read full topic

by guus at April 06, 2026 18:06

April 05, 2026

The XMPP Standards Foundation

The XMPP Newsletter March 2026

XMPP Newsletter Banner

Welcome to the XMPP Newsletter, great to have you here again! This issue covers the month of March 2026.

The XMPP Newsletter is brought to you by the XSF Communication Team. Just like any other product or project by the XSF, the Newsletter is the result of the voluntary work of its members and contributors. If you are happy with the services and software you may be using, please consider saying thanks or help these projects!

Interested in contributing to the XSF Communication Team? Read more at the bottom.

XSF Announcements

XSF Membership

Being an elected member of the XMPP Standards Foundation signals a commitment to open standards and professional engagement in and with the XMPP community. Your membership helps position the XSF as a healthy organization, which in itself is valuable. It also grants voting rights on technical and administrative matters within the XSF. The application process is lightweight and free of cost, and you can use your membership to get more involved more easily, too. If you are interested in joining the XMPP Standards Foundation as a member, please apply to our 2nd quarterly call for members admissions before May 17th, 2026, 00:00 UTC.

XMPP Events

  • XMPP Sprint in Berlin (DE / EN): will take place in June, from Friday 19th to Sunday 21st 2026, at the Wikimedia Deutschland e.V. offices in Berlin, Germany. If this sounds like the right event for you, come and join us! Just make sure to list yourself here, so we know how many people will attend and we can plan accordingly. If you have any questions or concerns, join us at the chatroom: sprints@muc.xmpp.org!
  • XMPP at FOSSY 2026: This year’s edition of FOSSY, the fourth Free and Open Source Software Yearly conference, will take place during the month of August, from Thursday 6th to Sunday 9th 2026 at the University of British Columbia, Vancouver, Canada. As always, there will be an XMPP Track and the call for proposals is open until April 30th, 2026. Once again, this year JMP is pleased to announce its annual offer for funding to the potential speakers who would like to host a talk on the XMPP track. Please, join them at discuss@conference.soprani.ca, and don’t hesitate to ask for more information!

Videos and Talks

XMPP Articles

XMPP Software News

XMPP Clients and Applications

  • aTalk has released versions 5.2.0 and 5.2.1 of its encrypted instant messaging with video call and GPS features for Android. The former version implements XEP-0384 (OMEMO Encryption), decryption of OMEMO messages, upgrades smack to support XEP-0420 (Stanza Content Encryption) and other relevant updates and fixes, while the latter introduces a fix for incorrect fetching. You can check the intermediate changelogs from 5.1.0 to 5.2.0 and 5.2.0 to 5.2.1 for all the details.
  • Conversations has released versions 2.19.13, 2.19.14 and 2.19.15 for Android. These releases bring fixes for a crash when changing the OMEMO bundle access model, a crash when sharing Quicksy XMPP addresses, and Quicksy registration on older devices. They also show hats in public conferences where available, show a warning in the chat if a contact is in a different time zone and it is night time for them, refactor automatic DND handling (based on system DND), and add a warning in the chat window if a contact is in DND mode. You can take a look at the changelog for all the details or check the intermediate changelogs from 2.19.12 to 2.19.13, 2.19.13 to 2.19.14 and/or 2.19.14 to 2.19.15.
Conversations showing contact’s local time in the chat window


  • Cheogram has released version 2.19.0-5 for Android. A bugfix release that addresses many crash fixes, never use iterative DNS for DNSSEC, fallback public server is now jabber.fr, merge security fixes from upstream, allow emoji search by emoji (eg for reactions), better isApp logic to default to commands list or not, and fix for channel avatars on some older servers. Make sure to check out the changelog for all the details.
  • Fluux Messenger has released versions 0.13.3 and 0.14.0, of its modern, cross-platform XMPP client for communities and organizations, with a list of additions, new features, improvements and bugfixes that is way longer than what we could ever mention in here! You can go straight to the full changelog or check the intermediate changelogs from 0.13.2 to 0.13.3 and/or 0.13.3 to 0.14.0 for all the details!
Fluux Messenger team chat


  • Gajim has released version 2.4.5 of its free and fully featured chat app for XMPP. This release lets you know when somebody reacted to one of your messages. It also comes with automatic timezone updates and improvements for macOS, and bugfixes. Thank you for all your contributions! You can take a look at the changelog for all the details.
Gajim automatic timezone updates


  • Monal has released version 6.4.19 for iOS and macOS with a rather large list of fixes.
  • Monocles has released version 2.1.4 of its chat client for Android. This release brings fixes for message moderation, truncated text, link click handling, disappearing reactions popup, MUC destruction, a crash when message body not spannable, allow camera/mic in command UI webview and pick channel binding fallback when server has no XEP-0440 (SASL Channel-Binding Type Capability) support.
  • Poezio has released version 0.16 of its console XMPP client. This release implements the receiving side of XEP-0425 (Moderated Message Retraction), XEP-0424 (Message Retraction) retraction events, a new /report plugin to report spam via XEP-0377 (Blocking Command Reports), a new /tcp-reconnect plugin to kill TCP connections on faulty networks, a tls_verify_cert option that can be set to false if the user wishes so, and several fixes. You can find all the details in the release announcement.
Moderated message retraction testing in Poezio


  • Profanity has released version 0.17.0 of its console based XMPP client written in C. The release announcement for this version is so long that it spans over 200 lines worth of information, which is a lot more than what we could ever list in here, so please make sure to read the changelog for all the details!
  • Psi+ has released version 1.5.2132.0 installer of its development branch of the Psi XMPP client.
  • Wimsy has released version 0.0.5 of its cross-platform XMPP client built with Flutter.
  • xmpp-web has released version 0.12.0 of its lightweight web chat client for XMPP server. You can read the intermediate changelog from 0.11.0 to 0.12.0 for all the details.

XMPP Servers

  • The Ignite Realtime community is happy to announce the release of Openfire 5.0.4. This release continues the efforts to provide stable 5.0.x series releases whilst they finalize work on the upcoming 5.1.0 release. Please refer to the full changelog for all the details or to the intermediate changelog for versions 5.0.3 to 5.0.4.
  • MongooseIM has released MongooseIM 6.6.0 with more additions, changes and fixes than what we can reasonably list in here! Make sure to read the changelog for all the details!
  • ProcessOne is pleased to announce another bugfix release: ejabberd 26.03. This brings support for roster pre-approval, and more than 100 commits with bugfixes all around, many of them dedicated to the new mod_invites, including also many security fixes. Make sure to read the changelog for all the details and a complete list of fixes and improvements on this release.

XMPP Libraries & Tools

Extensions and specifications

The XMPP Standards Foundation develops extensions to XMPP in its XEP series in addition to XMPP RFCs. Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).

Proposed

The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

  • Explicit Mentions
    • This specification defines a way to explicitly mention a person or groups of people.

New

  • No new XEPs this month.

Deferred

If an Experimental XEP is not updated for more than twelve months, it will be moved from Experimental to Deferred. If there is another update, the XEP will be moved back to Experimental.

  • No XEPs deferred this month.

Updated

  • Version 0.2.1 of XEP-0461 (Message Replies)
    • Update the example to use the correct fallback namespace. (mye)
  • Version 0.1.1 of XEP-0473 (OpenPGP for XMPP Pubsub)
    • Fix inconsistency between text and example; it’s the key attribute that carries the shared secret ID (formerly it was secret). (jp)
  • Version 0.1.1 of XEP-0493 (OAuth Client Login)
    • Fix reference to RFC 7628 for SASL OAUTHBEARER (XEP Editor: dg)
  • Version 0.1.1 of XEP-0511 (Link Metadata)
    • Added security consideration. Added alt text to example. (spw)

Last Call

Last calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call can help improve the XEP before returning it to the Council for advancement to Stable.

  • Last Call for comments on XEP-0377 (Blocking Command Reports)
    • This Last Call begins on 2026-03-31 and shall end at the close of business on 2026-04-14.

Stable

  • No stable XEPs this month.

Deprecated

  • No XEPs deprecated this month.

Rejected

  • No XEPs rejected this month.

Spread the news

Please share the news on other networks:

Subscribe to the monthly XMPP newsletter
Subscribe

Also check out our RSS Feed!

Looking for job offers or want to hire a professional consultant for your XMPP project? Visit our XMPP job board.

Newsletter Contributors & Translations

This is a community effort, and we would like to thank translators for their contributions. Volunteers and more languages are welcome! Translations of the XMPP Newsletter will be released here (with some delay):

  • Contributors:

    • To this issue: emus, cal0pteryx, Gonzalo Raúl Nemmi, Ludovic Bocquet, Sairam Bisoyi, XSF iTeam
  • Translations:

    • French: Adrien Bourmault (neox), alkino, anubis, Arkem, Benoît Sibaud, mathieui, nyco, Pierre Jarillon, Ppjet6, Ysabeau
    • Italian: Mario Sabatino, Roberto Resoli
    • Portuguese: Paulo

Help us to build the newsletter

This XMPP Newsletter is produced collaboratively by the XMPP community. Each month’s newsletter issue is drafted in this simple pad. At the end of each month, the pad’s content is merged into the XSF GitHub repository. We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. You have a project and want to spread the news? Please consider sharing your news or events here, and promote it to a large audience.

Tasks we do on a regular basis:

  • gathering news in the XMPP universe
  • short summaries of news and events
  • summary of the monthly communication on extensions (XEPs)
  • review of the newsletter draft
  • preparation of media images
  • translations
  • communication via media accounts

Unsubscribe from the XMPP Newsletter

For this newsletter either log in here and unsubscribe or simply send an email to newsletter-leave@xmpp.org. (If you have not previously logged in, you may need to set up an account with the appropriate email address.)

License

This newsletter is published under CC BY-SA license.

April 05, 2026 00:00

April 03, 2026

Monal IM

Upgrade ejabberd on Debian NOW

Chances are that you are running a Debian-based ejabberd server. Unfortunately, push notifications for all your Monal users on that server will break in less than two months. And chances are that some of your S2S connections are already failing today.

Some background

The Web-PKI is moving away from certificates that have both the TLS Web Server Authentication and the TLS Web Client Authentication extended key usages enabled. Most CAs have already stopped issuing certificates with the TLS Web Client Authentication EKU set, or will stop doing so in a few months.

Traditionally, both servers of an S2S connection in XMPP authenticate to each other. One is the server side of the TLS handshake (and needs the server EKU), the other is the client side (and needs the client EKU). Which is which depends solely on which server opens the underlying TCP connection: that one becomes the client.

The underlying problem

This mutual authentication breaks once the client side can’t present a certificate carrying the client EKU. All TLS libraries fail the authentication in this case.
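The EKU situation is easy to inspect with openssl. The snippet below is an illustration, not from the post: it generates a throwaway self-signed certificate carrying only the server EKU (the shape most CAs are moving towards) and then lists its extended key usages; point -in at your server's real certificate instead.

```shell
# Illustration only: a throwaway self-signed cert with just the
# server EKU, as most CAs now issue them.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=xmpp.example.org" \
  -addext "extendedKeyUsage=serverAuth" \
  -keyout demo.key -out demo.crt 2>/dev/null

# List the EKUs. If "TLS Web Client Authentication" is missing here,
# strict validators will reject this cert on the client side of an
# S2S connection.
openssl x509 -in demo.crt -noout -ext extendedKeyUsage
```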

The solution

Fortunately, almost all TLS libraries allow users to customize the certificate validation process, and ejabberd was updated in version 25.08 (August 2025) to do exactly this: ignore the EKU when validating the certificate. Unfortunately, that change came too late to make it into Debian Trixie, the current stable Debian release.

Debian

This is where the real problem starts. Many server operators use Debian because it is so extremely stable and well maintained. Of course Debian provides point releases to upgrade packages if they have some severe bug (like not being able to properly participate in S2S connections). The next point release will contain a fix for the problem, but that will still take more than a month to reach servers.

How to fix the situation

I urge all ejabberd server operators to take one of the following steps as soon as possible:

  • Add trixie-proposed-updates to your sources.list file and update ejabberd to version 24.12-3+deb13u2
  • Install these two packages kindly built by Holger (an ejabberd developer): https://ejabberd.messaging.one/download/s2s-fix/
  • Install the official packages by ProcessOne from here: https://repo.process-one.net/ (caution: these packages use /opt/ejabberd, so you’ll need to copy your config from /etc/ejabberd over!)
  • Switch to Prosody (Prosody’s fix for the S2S client EKU problem made it into Debian Trixie)
  • Switch to some other distribution
  • If you absolutely don’t want to take any action, please enable at least dialback by adding mod_s2s_dialback: {} to the modules section of your ejabberd config. But be aware: while this will fix the S2S connection to Monal’s push servers, other servers might not have turned it on (both parts must turn it on to be effective). The security of the connection will also be degraded when using dialback rather than properly verifying certificates.
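The dialback fallback from the last option amounts to a one-line change in the ejabberd configuration. A minimal sketch, assuming the usual /etc/ejabberd/ejabberd.yml layout; a real config's modules section will already contain many other entries:

```yaml
# Sketch only: enable XEP-0220 dialback as an S2S fallback.
# Your existing modules section will have many more entries.
modules:
  mod_s2s_dialback: {}
```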

Last words

Many thanks to Holger Weiß and Philipp Huebner for fixing this bug in Debian!

April 03, 2026 00:00

March 27, 2026

Erlang Solutions

Avoiding Platform Lock-In in Regulated Environments

Platform lock-in is often discussed as a commercial issue. Organisations adopt infrastructure that works well initially and later realise that moving away from those services becomes expensive or operationally disruptive.

For platforms that run continuously under heavy demand, the consequences appear somewhere else first. They appear in architecture.

Infrastructure choices influence how systems scale, how faults are contained, and how easily the platform can evolve as requirements change. In regulated environments those decisions often remain in place for years, which means architectural flexibility matters as much as technical capability.

When infrastructure becomes tightly coupled to a particular provider, systems may still perform well day to day. The real impact usually surfaces later when workloads grow, regulations change, or operational expectations increase. At that point platform lock-in risks begin to affect reliability as well as flexibility.

Why Platform Lock-in Matters in Regulated Environments

These architectural constraints become particularly visible in regulated industries where infrastructure decisions cannot be changed casually.

Financial services platforms must maintain traceable transactions and strict audit trails. Betting platforms process large volumes of activity during live sporting events. Streaming platforms deliver real-time content to global audiences who expect uninterrupted interaction.

Systems supporting these environments often remain active for long periods, which means infrastructure decisions made early in the system’s lifecycle can shape how the platform changes years later.

The Operational Impact of Vendor Lock-in

Many organisations already recognise the risks associated with vendor lock-in. The 2024 Flexera State of the Cloud Report found that 89% of organisations now operate multi-cloud strategies, with reducing infrastructure dependency and avoiding vendor concentration cited as key motivations.

The concern goes beyond procurement strategy. When platforms rely heavily on provider-specific services for messaging, orchestration, or event processing, those dependencies begin shaping how the system behaves under load.

In regulated environments that dependency can become a reliability concern. Infrastructure decisions that once simplified development may later restrict how systems scale, evolve, or respond to operational change.

Distributed Systems Architecture and Long-Running Platforms

The reason platform lock-in becomes particularly serious in regulated environments is tied to how many of these platforms operate: as long-running distributed systems.

Large-scale entertainment services rarely behave like short-lived workloads that restart frequently. Messaging layers, real-time interaction systems, and event pipelines maintain persistent connections while processing continuous streams of activity.

Why Long-Running Systems Behave Differently

Gaming platforms illustrate this clearly. Competitive environments host thousands of players interacting simultaneously, all of whom expect consistent state across the system. Betting platforms experience similar behaviour during major sporting events when users react instantly to changing odds. Streaming platforms see comparable spikes as audiences interact during live broadcasts.

These platforms rely on distributed systems architecture that must coordinate large numbers of connections and events while remaining continuously available.

Research published by ACM Queue examining large-scale distributed systems highlights how persistent connections and real-time workloads increase coordination pressure across system components, particularly during sudden spikes in concurrency.

When coordination layers rely heavily on platform-specific services, architectural dependency gradually builds. Over time the system begins to inherit those infrastructure constraints.

Reliability Requirements in High Reliability Systems

Systems operating under these conditions often prioritise stability over rapid iteration. Platforms designed as high reliability systems must remain available while managing constant traffic, evolving workloads, and unpredictable user behaviour.

Infrastructure decisions therefore have long-term consequences. When coordination, messaging, or state management rely on proprietary platform services, architectural flexibility narrows over time.

Why Gaming, Betting and Streaming Platforms Reveal Infrastructure Limits

Systems built as long-running distributed environments face their toughest tests during moments of concentrated demand. Entertainment platforms provide a clear example.

Large audiences often react simultaneously. A football match entering extra time can trigger thousands of betting transactions within seconds. A major esports tournament can bring large numbers of players online at once. Streaming platforms experience bursts of interaction as viewers respond together during live broadcasts.

Traffic Spikes and Scalable Distributed Systems

Systems supporting these environments must function as scalable distributed systems capable of handling sudden increases in activity without losing consistency or responsiveness.

Instead of steady growth, activity often arrives in waves. Large numbers of users connect, interact, and generate events within very short timeframes. The system must coordinate these interactions across multiple nodes while maintaining reliable communication between services.

Infrastructure that appears sufficient under normal conditions can struggle during these spikes if the surrounding architecture relies too heavily on provider-specific services.

Real-World Example: BET Software

These architectural pressures are particularly visible in betting platforms where activity surges during live sporting events.

BET Software operates large-scale betting technology platforms where thousands of users interact with markets simultaneously. During major sporting events systems must process rapid updates, recalculate market information, and distribute new data to users in real time.

BET Software:  Avoiding platform lock-in in regulated environments

Their distributed systems illustrate how reliability and responsiveness become essential in environments where activity concentrates around shared moments.

Architectures designed with flexibility across infrastructure layers tend to scale and recover more predictably than those tightly coupled to provider-specific services.

Architectural Patterns to Avoid Vendor Lock-in

Recognising the risks of vendor lock-in is useful only if it leads to better architectural decisions. Systems that remain adaptable across infrastructure layers often share several structural characteristics.

Decoupling Infrastructure Dependencies

Architectures designed to avoid vendor lock-in typically separate application logic from infrastructure services wherever possible. This allows teams to evolve system components independently without redesigning the entire system.

Designing Fault Tolerant Systems

Platforms that must operate continuously also benefit from architectures designed as fault tolerant systems, where failures can be contained locally rather than cascading across the entire platform.

Common patterns include:

  • Decoupled services that scale independently
  • Communication through open protocols rather than proprietary messaging layers
  • Distributed state management instead of provider-specific coordination services
  • Horizontal scaling across nodes
  • Infrastructure abstraction layers separating application logic from provider-specific implementations

These patterns help ensure that infrastructure choices support the system rather than define its limitations.

Where Elixir Supports High Reliability Systems

Technology choices also influence how easily distributed systems can maintain reliability while remaining adaptable.

Languages built on the Erlang virtual machine, including Elixir, were designed for environments where systems must remain available while handling large numbers of concurrent processes. The runtime emphasises process isolation and supervision structures that allow failures to be contained locally rather than cascading across the system.

Building Fault Tolerant Systems for Long-Running Platforms

These characteristics make the platform particularly well suited for high reliability systems that must remain active while managing heavy concurrency.

The advantage lies in the runtime model rather than any single infrastructure provider. Systems built around resilient distributed behaviour are easier to evolve because they remain stable even as infrastructure decisions change around them.

Designing Systems That Reduce Platform Lock-in

Looking across these examples reveals a consistent pattern.

Platform lock-in becomes most visible in systems that must operate continuously while adapting to changing demand. Regulated environments amplify the challenge because infrastructure decisions often remain in place for years while platforms continue to evolve.

Gaming, betting, and streaming services make these limits easier to see. Sudden spikes in activity quickly expose architectural weaknesses, and systems designed with flexible infrastructure tend to scale and recover more predictably.

If you are building platforms where reliability and long-running distributed workloads matter, it may be worth assessing how your architecture handles platform lock-in. To explore these challenges further, get in touch with the Erlang Solutions team.

The post Avoiding Platform Lock-In in Regulated Environments appeared first on Erlang Solutions.

by Erlang Solutions Team at March 27, 2026 15:26

March 25, 2026

ProcessOne

ejabberd 26.03


If you are upgrading from a previous version, there is a change in the SQL schemas, please read below. There are no changes in configuration, API commands or hooks.


Changes in SQL schema

This release adds a new column to the rosterusers table in the SQL database schemas to support roster pre-approval. This task is performed automatically by ejabberd by default.

However, if your configuration file has disabled update_sql_schema toplevel option, you must perform the SQL schema update manually yourself. Those instructions are valid for MySQL, PostgreSQL and SQLite, both default and new schemas:

ALTER TABLE rosterusers ADD COLUMN approved boolean NOT NULL DEFAULT false;
ALTER TABLE rosterusers ALTER COLUMN approved DROP DEFAULT;

You can skip the second statement on SQLite.
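After running the statements above (or letting ejabberd apply them automatically), the result can be sanity-checked. A sketch using the standard information_schema, which works on MySQL and PostgreSQL; on SQLite, use PRAGMA table_info(rosterusers); instead:

```sql
-- Check that the approved column exists; after the second ALTER it
-- should also have no default value.
SELECT column_name, is_nullable, column_default
FROM information_schema.columns
WHERE table_name = 'rosterusers'
  AND column_name = 'approved';
```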

SASL channel binding changes

This version adds the ability to configure the handling of the client flag 'wanted to use channel-bindings but was not offered one'. By default, ejabberd aborts connections that present this flag, as it could indicate the presence of a rogue MITM proxy between the server and the client that strips the exchanged data of the information required for channel binding.

This can cause problems for servers that use a proxy server which terminates the TLS connection (i.e. there is a MITM proxy, but it is approved by the server administrator). To handle this situation, we have added code to ignore this flag if the server administrator disables channel binding handling by disabling the -PLUS authentication mechanisms in the configuration file:

disable_sasl_mechanisms:
  - SCRAM-SHA-1-PLUS
  - SCRAM-SHA-256-PLUS
  - SCRAM-SHA-512-PLUS

We also ignore this flag on SASL2 connections when filtering the offered authentication methods by the available user passwords has disabled all -PLUS mechanisms.

ChangeLog

Core

  • Fix MySQL authentication for TLS connections that required auth plugin switch
  • Improve handling of the SCRAM 'wanted to use channel-bindings but was not offered one' flag
  • Add ability for mod_options values to depend on other options
  • Don't fail to classify stand-alone chat states
  • Fix some warnings compiling with Erlang/OTP 29 (#4527)
  • ejabberd_ctl: Document how to set empty lists in ejabberdctl and WebAdmin
  • ejabberd_http: Add handling of Etag and If-Modified-Since headers to files served by mod_http_upload
  • ejabberd_http: Ignore whitespaces at end of host header
  • SQL: Add ability to mark that column can be null in e_sql_schema
  • Tests: Add tests for SASL2
  • Tests: Make table cleanup in test more robust

Modules

  • mod_fast_auth: Offered methods are based on available channel bindings
  • mod_http_api: Always hide password in log entries
  • mod_mam: Call store_mam_message hook for messages that user_mucsub_from_muc_archive was filtering out
  • mod_mam_sql: Only provide the new XEP-0431 fulltext field, not old custom withtext
  • mod_muc_room: Fix duplicate stanza-id in muc mam responses generated from local history (#4544)
  • mod_muc_room: Fix hook name in commit 7732984 (#4526)
  • mod_pubsub_serverinfo: Don't use gen_server:call for resolving pubsub host
  • mod_roster: Add support for roster pre-approval (#4512)
  • mod_roster: Fix display of groups in WebAdmin when it's a list
  • mod_roster: In the WebAdmin page, first execute SET actions, then GET
  • mod_roster_mnesia: Improve transformation code

mod_invites

  • Makefile: Run invites-deps only when files are missing
  • Fix path to bootstrap files
  • Check at start time the syntax of landing_page option (#4525)
  • Send &aposLink&apos http header (#4531)
  • Set meta.pre-auth to skip redirect_url if token validated (#4535)
  • Many security fixes (#4539)
  • Add favicon and change color to match ejabberd branding
  • Enable dark mode
  • Add support for webchat_url
  • Migrate to bootstrap5 and update jquery
  • No inline scripts
  • Make format csrf token
  • Add csrf token to failed post
  • Include js/css deps in static dir
  • Correct hashes for bootstrap 4.6.2
  • Hint at type for landing_page opt
  • Many more security fixes (#4538)
  • Check CSRF token in register form
  • Add integrity hashes to scripts and css
  • Comment unused resources
  • Add security headers
  • Remove debug log of whole query parameters (including pw)
  • Don't crash on unknown host from http host header
  • Make creating invite transactional
  • Set overuse limits (#4540)
  • Fix broken path when behind proxy with prefix (#4547)

Container and Installers

  • Bump Erlang/OTP 28.4.1
  • make-binaries: Bump libexpat to 2.7.5
  • make-binaries: Bump zlib to 1.3.2
  • make-binaries: Enable missing crypto features (#4542)

Translations

  • Update Bulgarian translation
  • Update Catalan and Spanish translations
  • Update Chinese Simplified translation
  • Update Czech translation
  • Update French translation
  • Update German translation

Acknowledgments

We would like to thank the contributions to the source code, documentation, and translation provided for this release by:

And also to all the people contributing in the ejabberd chatroom, issue tracker...

Improvements in ejabberd Business Edition

Customers of the ejabberd Business Edition, in addition to all those improvements and bugfixes, also get the following changes:

  • Add p1db backend for mod_auth_fast
  • Fix issue when cleaning MAM messages stored in p1db
  • mod_unread fixes
  • Web push fixes

Full Changelog

https://github.com/processone/ejabberd/compare/26.02...26.03

ejabberd 26.03 download & feedback

As usual, the release is tagged in the Git source code repository on GitHub.

The source package and installers are available in ejabberd Downloads page. To check the *.asc signature files, see How to verify ProcessOne downloads integrity.

For convenience, there are alternative download locations like the ejabberd DEB/RPM Packages Repository and the GitHub Release / Tags.

The ecs container image is available in docker.io/ejabberd/ecs and ghcr.io/processone/ecs. The alternative ejabberd container image is available in ghcr.io/processone/ejabberd.

If you think you've found a bug, please search for or file a bug report on GitHub Issues.

by Jérôme Sautret at March 25, 2026 17:14

Mathieu Pasquet

Poezio 0.16

Almost exactly one year since the last release, here is poezio 0.16.

Poezio is a terminal-based XMPP client which aims to replicate the feeling of terminal-based IRC clients such as irssi or weechat; to this end, poezio originally only supported multi-user chats and anonymous authentication.

Features

A screenshot of poezio showing several test messages, with two of them moderated, one with a reason and the other without.

  • Receiving side of moderation (XEP-0425) and retraction (XEP-0424) events
  • New /report plugin to report spam (XEP-0377)
  • New /tcp-reconnect to kill TCP connections on faulty networks (#3406).
  • Due to the CA store being used by default since 0.15, the historical TOFU way of managing certs in poezio was broken. There is now a tls_verify_cert option that can be set to false if the user wishes.

Fixes

  • Several fixes around carbons (XEP-0280) (#3626, #3627)
  • The roster is now kept until the XMPP session ends (which means that on a disconnect with smacks (XEP-0198) enabled, the roster is kept until the session times out) (#3614)
  • Traceback in /affiliations due to slixmpp’s rust JID rewrite
  • Traceback in the /bookmarks tab
  • Infinite syncing of MAM history due to bad paging
  • Saving remote MUC bookmarks no longer forces the JID as the bookmark name
  • pkg_resources is no longer used.
  • Plenty of DeprecationWarnings have been removed.
  • /set option=value mishandled the = separator and displayed the wrong value.

Internal

  • Plenty of typing updates and fixes

Removals

  • Removal of uv as a first-party target for launch/update.sh (no vcware).
  • Removal of remaining origin-id usage.

by mathieui at March 25, 2026 17:12

March 23, 2026

Erlang Solutions

Meet the Team: Viktoria Laufer

Intro
In this edition of our Meet the Team series, we’d like to introduce Viktoria, Project Manager at Erlang Solutions.

She shares what it’s like to work across complex, multilingual projects, reflects on the highlights of her role so far, and gives a glimpse into the experiences that have shaped her journey.


What does your role as Project Manager at Erlang Solutions involve, and what have been some highlights so far?


Working as a Project Manager at Erlang Solutions is never dull. I had the opportunity to join a fascinating project focused on developing an intelligent virtual assistant for MUSE, the Italian science museum. My role spans multiple responsibilities, combining project management, Scrum Master duties, and product management.


Beyond building something innovative, we also faced the added complexity of a language barrier. Optimising a chatbot for accuracy required us to rethink how retrieval and validation function when the primary knowledge base is not in English, but in highly specialised Italian scientific language curated by museum experts.


In close collaboration with EbitMax and MUSE, we ensured that the system accurately reflects a carefully curated and continuously evolving knowledge base, while supporting interactions in Italian, German, and English.

What do you enjoy most about working at Erlang Solutions?


What I enjoy most is working closely with my team—exchanging ideas, solving problems, and learning together. I feel fortunate to collaborate across all business units, where everyone contributes to improving Tourio, our product. I also appreciate the opportunity to create new processes, continuously learn, and grow while tackling new challenges.

Most importantly, I value the flexibility of remote work. I used to travel extensively as a digital nomad; now I’m more settled, but living in a place with limited job opportunities makes this role even more meaningful to me.

Outside of work, how do you like to spend your time?


Outside of work, whenever the wind picks up, you’ll find me kitesurfing with my friends and my boyfriend. Before joining Erlang Solutions, I worked as a kite instructor, and I still occasionally teach on weekends.

When it’s not windy, I stay active with calisthenics or spinning at the gym. Sundays are usually spent cheering on my boyfriend at his local football matches. I also travel frequently to Hungary to spend time with my family and my dog—or to any destination where I can get back on the water.

Final thoughts

A big thank you to Viktoria for sharing her story with us. From leading collaborative projects to embracing new challenges, her approach reflects the curiosity and drive that shape our work at Erlang Solutions.

Stay tuned for more Meet the Team stories, where we continue to spotlight the people behind the technology.

The post Meet the Team: Viktoria Laufer appeared first on Erlang Solutions.

by Erlang Solutions Team at March 23, 2026 13:11

March 20, 2026

ProcessOne

Fluux Messenger 0.14.0 - Full Room Control & Richer Contact Profiles


Fluux Messenger 0.14.0 is a major release. Fluux Messenger is growing fast. Thank you to everyone contributing, testing, and spreading the word!

This release finally brings room management. Moderators can retract messages, owners can manage rooms end-to-end, contacts have real profiles now, and the fix list is longer than usual. A lot landed in this release.

What's New

Full MUC Room Management

Fluux Messenger now supports the complete room lifecycle directly from the client. You can create, configure, and destroy MUC rooms without reaching for an admin console. Room owners also get full user management tools: change affiliations and roles, kick or ban occupants, and browse room directories with proper RSM pagination. A new modal lets you join any room directly by entering its JID.

This makes Fluux Messenger a serious option for teams self-hosting their own ejabberd server and managing communities day-to-day.


Message Moderation (XEP-0425)

Moderators can now retract messages posted by other users in MUC rooms, with full attribution and reason display. This is a meaningful step toward responsible community management inside open, sovereign messaging infrastructure — no proprietary platform required.

MUC Hat Management (XEP-0317)

Room owners can now define, assign, and remove hats for occupants via ad-hoc commands. Hats are a lightweight, expressive way to convey roles and status in a room beyond the standard affiliation model. The full hat management UI is accessible directly from the room interface.

Rich Contact Profiles with vCard (XEP-0054)

Contact information just got a lot more useful. Fluux Messenger now displays vCard data — full name, organisation, email, and country — in contact popovers and profile views. You can also edit your own vCard directly from profile settings, adding, updating, or removing fields as needed. No more opaque JIDs; your contacts now have a face and a name.

Per-Room Ignored Users (XEP-0223)

You can now ignore specific users on a per-room basis, with ignore lists stored server-side via PEP private storage (XEP-0223). Filtering has been improved to cross-match JIDs and occupant IDs, and notifications from ignored users (including quoted replies) are now properly suppressed.

Contact Management from the Room

A new contact management dropdown in the occupant sidebar and a dedicated contact addition button in the profile screen make it easy to manage your contact list without leaving the conversation. Right-clicking (or long-pressing) a nickname in room messages now brings up an occupant context menu with quick actions.

Quality of Life

  • Entity Time (XEP-0202) — see your contact's local time in the chat header and contact popover, handy for distributed teams.
  • Message delivery errors are now displayed inline, with the option to retry sending directly.
  • Do Not Disturb mode now suppresses sound and desktop notifications automatically.
  • Font size setting added to appearance preferences.
  • Avatar lightbox — click an avatar in message view to see it full-size.
  • PEP-based conversation list sync (ConversationSync module) keeps your sidebar consistent across sessions.
  • External links now open in a Tauri webview popup instead of jumping to the system browser.
  • Full-screen occupant panel on small screens for a better mobile experience.

Bug Fixes & Reliability

This release addresses a significant number of issues:

  • Active rooms now correctly move to the top of the sidebar on new messages
  • Missing room messages after reconnect or app restart
  • Blank window in MUC rooms caused by a stale ResizeObserver
  • Reactions UI properly enabled in rooms with stable occupant identity
  • Native window theme now syncs correctly with system mode in Tauri
  • Modals no longer close when click-dragging from inside to outside
  • Fixed: owner showing as moderator in the chat view
  • Navigation stack management improved for mobile

Get Fluux Messenger

Download for Windows, macOS, or Linux on the latest Release page.

Source code is available at GitHub.


Your messages, your infrastructure: no vendor lock-in.
Sovereign by design. Built in Europe, for everyone.

by Mickaël Rémond at March 20, 2026 11:13

March 18, 2026

Ignite Realtime Blog

Openfire 5.0.4 Release

The Ignite Realtime community is happy to announce a new release of its open source, real-time communications server Openfire! Version 5.0.4 continues our effort to provide stable 5.0.x series releases whilst we finalize work on an upcoming 5.1.0 release. Please refer to the full changelog for more details.

You can obtain the new version of Openfire for your platform from its download page. The sha256sum values for the release artifacts are:

c49add8f50999b2d7fcdd8960bc7d70bf59eb95d12daedf92902e4b034c1c737  openfire-5.0.4-1.noarch.rpm
14d22bef24fb01770f51c655c8b3b54207125b1b70641175d8ad25b585e6332a  openfire_5.0.4_all.deb
ddd40e0bac4c4fae0678b6df4fd5ad28f77af50fd530e3327326f3b488f16ae4  openfire_5_0_4.dmg
8c2fcb27f9afe01b79d59f7bf0736b21cdb72b5464de25a183b596329e351099  openfire_5_0_4.exe
01c7314268d87b1f8eee0677bb89656f12a082e6461b207d3955f5d9632e2f78  openfire_5_0_4.tar.gz
13b579672b2ce238934aa919cd968636c0f5c8afda5aeb3aec08d60feca35df4  openfire_5_0_4_x64.exe
05b9e5fa976202ef97d183177f6de699cf68bf0cfd422f721a4c8dc5676c1612  openfire_5_0_4.zip
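Downloads can be verified against the published sums before installing. A minimal sketch using `sha256sum` from GNU coreutils; the file name `openfire-demo.bin` and the `SHA256SUMS` file here are stand-ins for an actual Openfire artifact and its matching hash line from the list above:

```shell
# Create a stand-in artifact; in practice this is the downloaded release file.
printf 'demo artifact contents' > openfire-demo.bin

# Save the expected "<sha256>  <filename>" line; normally you would copy the
# matching line from the release announcement into this file instead.
sha256sum openfire-demo.bin > SHA256SUMS

# Verify the download: prints "<filename>: OK" and exits 0 on a match,
# or reports FAILED with a non-zero exit status on a mismatch.
sha256sum -c SHA256SUMS
```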

For those of you that enjoy metrics, here’s an accounting of 5.0.3 release artifact downloads.

Name                         OS                      Downloads
openfire_5_0_3_x64.exe       Windows 64bit Launcher  12,407
openfire_5_0_3.exe           Windows 32bit Launcher  8,269
openfire_5.0.3_all.deb       Linux Deb               8,113
openfire_5_0_3.zip           Zip binary              6,747
openfire-5.0.3-1.noarch.rpm  Linux RPM               5,811
openfire_5_0_3.tar.gz        Tar.gz binary           5,773
openfire_5_0_3.dmg           Mac                     4,646
Total                                                51,766

We’d love to hear from you! Please join our community forum or group chat and let us know what you think!

For other release announcements and news follow us on Mastodon or X.

1 post - 1 participant

Read full topic

by akrherz at March 18, 2026 12:05

March 17, 2026

Erlang Solutions

Messaging as Infrastructure, Not Just a Feature

The Backbone of Real-Time Digital Services

Most platforms treat messaging as something you add. A chat module bolted on after the core product ships. A notification layer wired in once users start asking for it. A support tool that earns its place once the team has bandwidth to integrate it.

That approach made sense when digital services were simpler, but it does not hold up today.

Modern platforms depend on messaging the way infrastructure depends on foundations. Authentication flows run through it, transactions are confirmed through it, and services coordinate across distributed systems because of it. Users have live conversations on top of it, and when messaging slows or fails the whole platform feels unstable even when every other component is technically fine.

For engineering leaders responsible for systems that run continuously, that distinction matters more than most architectural decisions they will make.

Live Environments Expose Weakness Instantly

Batch-driven systems can hide problems. A delay absorbed overnight goes unnoticed, and a queue that backs up and clears before morning leaves no visible trace.

But live systems cannot afford that luxury.

Entertainment platforms, gaming ecosystems, fintech services, live commerce environments, and global support operations all run under constant visibility. Every interaction is observable in real time, and a dropped message during a live session is immediately visible to the person on the other side.

The financial exposure is also significant. Research from Gartner estimates average IT downtime costs at around $5,600 per minute, a figure that rises sharply during peak demand. The reputational cost can be harder to recover from. Research from PwC found that 32 percent of customers would stop using a brand they had actively liked after a single poor experience, and in digital services, that experience is often a delayed response, a failed notification, or a conversation that disconnects mid-session.

Live environments remove the tolerance for fragility that batch systems enjoy, and when messaging is unreliable, user trust soon follows.

Scalability Without Responsiveness Is Not Enough

Live systems expose fragility quickly, and scale is usually the next pressure that follows.

When teams talk about scalability, they’re usually talking about volume: more concurrent users, more connections, higher throughput, bigger numbers across the board. But capacity alone doesn’t guarantee a good experience.

A system can remain technically online under heavy load while the quality of the experience quietly begins to slip. Latency creeps upward, message delivery becomes less predictable, and failover processes interrupt sessions that previously felt stable. From an infrastructure perspective the system is still operating, but from a user’s perspective the experience is already degrading.

True scalability is really about maintaining responsiveness as pressure increases. That means designing architecture with those conditions in mind from the beginning: the ability to expand horizontally across distributed nodes, load balancing that distributes traffic intelligently instead of creating centralised bottlenecks, consistent state across clusters, and no single points of failure. Just as importantly, delivery guarantees still need to hold even when network conditions are less than ideal.

Resilience isn’t something you bolt on later once growth arrives. Networks partition, nodes fail, and traffic spikes appear without warning. Systems built on the assumption that conditions will remain stable tend to reveal their weaknesses at exactly the moment when the consequences are most expensive. Organisations that design their messaging architecture with those realities in mind avoid the most painful scaling problems.

Internal Communication as Operational Infrastructure

The same messaging systems that support customer interactions also coordinate how organisations operate internally.

Customer-facing communication tends to attract most of the attention, but the internal messaging infrastructure behind it is just as strategically important.

Hybrid and distributed teams rely on real-time communication to stay coordinated, while operational systems depend on alerting pipelines that surface anomalies before they escalate into incidents. Engineering teams also need observability data flowing continuously across services so they can understand what is happening inside complex systems and respond quickly when something changes.

When internal communication becomes fragmented, the effects are immediate. Decision-making slows as context becomes siloed across tools, incident response turns reactive rather than proactive, and support agents end up jumping between systems to assemble information that a unified platform could surface instantly. Engineers face a similar challenge, losing visibility into behaviour that spans multiple services and environments.

That internal alignment ultimately shapes the external customer experience. Research from Salesforce shows that 73 percent of customers expect companies to understand their specific needs, and meeting that expectation depends on systems capable of maintaining and propagating context in real time. 

In many ways, whether personalisation is practical or merely aspirational comes down to architecture.

Omnichannel Expectations Demand Unified Architecture

The pressure on messaging systems does not stop inside the organisation. Customers now expect the same continuity across every channel they use.

Customers move constantly between web platforms, mobile apps, in-app messaging, and social channels, but they don’t experience those interactions as separate systems. To them it’s all part of one ongoing relationship with a brand.

That expectation shows up clearly in the data. Zendesk reports that 70 percent of customers expect anyone they interact with to have the full context of their situation, and they have very little patience for experiences that force them to repeat themselves when they switch devices or move between platforms.

Meeting that expectation isn’t just a matter of adding more channels. What actually matters is whether the systems behind those channels share context. Without a unified messaging backbone that maintains identity, presence, and session continuity across distributed systems, every new touchpoint risks becoming another silo.

This is where architecture starts to matter. Many organisations assemble their communication stack gradually from disconnected tools, and those systems were never designed to maintain continuity across environments. By contrast, distributed platforms built around clustering, federation, and consistent state management make it possible to carry context across services and channels in a reliable way.

In other words, context persistence isn’t really a user interface feature. It’s the result of architectural decisions made much deeper in the stack.

Growth Exposes Architectural Weakness

Architectures that support seamless cross-channel experiences must also survive rapid growth.

Messaging systems often work well at moderate scale. The real pressure arrives once growth starts to change the shape of the system.

A product launch can suddenly create concurrency spikes that never appeared during development. Expanding into new regions introduces latency sensitivities that flat traffic patterns never exposed. A successful marketing campaign drives engagement surges that the original architecture was never really designed to handle.

When messaging has been treated as just another product feature, scaling usually means significant re-engineering at exactly the wrong moment. Vertical scaling starts to introduce bottlenecks, point integrations multiply as teams patch things together, and operational complexity grows with every workaround. Before long, the team that should be building new capabilities is stuck rebuilding the foundations instead.

Infrastructure-grade messaging grows in a very different way. Capacity expands horizontally as demand increases, rather than running headfirst into ceiling after ceiling. Clustered deployments distribute load while maintaining consistent state, and fault tolerance helps isolate failures instead of letting them cascade across the system. Because the platform is designed to be extensible, new services can integrate without destabilising everything that already works.

The result is that scalability stops being an emergency response and becomes a built-in property of the system.

Reliability, User Experience, and Brand Perception

As systems scale, reliability becomes visible not just to engineers but to users.

Reliability is measured inside organisations through uptime dashboards and incident reports, but outside the organisation it is measured through trust.

Research from HubSpot shows that 90 percent of customers consider an immediate response important when they have a support query. In real-time environments, immediate increasingly means seconds rather than minutes, and tolerance for anything slower continues to shrink.

When communication systems fail during periods of peak demand, users are not reading the engineering post-mortem. They are forming a perception of the platform. Notifications that arrive too late during critical moments, conversations that cut off mid-session, and responses that stall when they matter most all shape how a system is experienced. Those moments shape how a platform is understood far more powerfully than any feature release.

Engineering leaders who treat messaging as foundational infrastructure reduce that exposure. Systems designed for distribution, clustering, and fault tolerance can maintain consistency under load, preserve state during failover, and absorb traffic spikes without visible degradation.

In real-time systems, reliability ultimately shows up as user confidence, especially during the moments when attention is highest.

Infrastructure Is a Strategic Choice

Taken together, these pressures change how messaging needs to be designed inside modern platforms.

Always-on digital environments do not wait for architecture to catch up. Communication flows continuously through modern platforms, carrying transactions, context, and operational signals across distributed systems. When those flows hold steady, the platform does too.

Treating messaging as just another feature underestimates the role it plays. Treating it as infrastructure reflects how modern platforms actually operate and what it costs when communication breaks down.

Organisations that design messaging as core infrastructure give themselves something valuable: the ability to operate confidently under real conditions. They sustain responsiveness as demand grows, maintain continuity across channels, and protect user trust during the moments that matter most.

If you are assessing the resilience and scalability of your real-time messaging architecture, get in touch.

The post Messaging as Infrastructure, Not Just a Feature appeared first on Erlang Solutions.

by Mateusz Starański at March 17, 2026 11:35

March 14, 2026

Mathieu Pasquet

slixmpp v1.14.1

This release brings support for three new XEPs, fixes one specific bug that was breaking poezio, and includes a lot of internal improvements, such as typing hints and documentation.

Thanks to everyone involved for this release!

I caught a new bug minutes after releasing, so the version to take is actually 1.14.1 and not 1.14.0!

Removals

  • The obsolete UnescapedJID class was removed.

Features

  • New plugin: XEP-0462 (Pubsub Type Filtering)
  • New plugin: XEP-0511 (Link Metadata)
  • New plugin: XEP-0478 (Stream Limits Advertisement)
  • StanzaBase objects now have a pretty_print() method, for better readability.
  • XEP-0050: use of the slixmpp "internal API" to handle commands
  • Added the missing unescape_node() method on the rust JID implementation

Fixes

  • Fixed a bug in XEP-0410 when changing nicks

Documentation

  • Added a lot of docstrings, and improved other ones
  • Lots of typing updates as well

Internal, CI & build

  • Doctests are now run
  • Doc dependencies are now part of the pyproject file
  • pyo3 update
  • performance micro-optimization on session establishment

by mathieui at March 14, 2026 13:42

March 12, 2026

Erlang Solutions

What Breaks First in Real-Time Messaging?

Real-time messaging sits at the centre of many modern digital products. From live chat and streaming reactions to betting platforms and collaborative tools, users expect communication to feel immediate and consistent.

When pressure builds, the system doesn’t typically collapse. It starts to drift. Messages arrive slightly late, ordering becomes inconsistent, and the platform feels less reliable. That drift is often the first signal that chat scalability is under strain.

Imagine a live sports final going into overtime. Millions of viewers react at once. Messages stack up, connections remain open, and activity intensifies. For a moment, everything appears stable. Then delivery slows. Reactions fall slightly out of sync. Some users refresh, unsure whether the delay is on their side or yours.

Those moments reveal whether the system was designed with fault tolerance in mind. If it was, the platform degrades predictably and recovers. If it wasn’t, small issues escalate quickly.

This article explores what breaks first in real-time messaging and how early architectural decisions determine whether users experience resilience or visible strain.

Real-Time Messaging in Live Products

Live products place real-time messaging under immediate scrutiny. On sports platforms, streaming services, online games, and live commerce sites, messaging is visible while it happens. When traffic spikes, performance issues are exposed immediately.

User expectations are already set. WhatsApp reports more than 2 billion monthly active users, shaping what instant communication feels like in practice. That expectation carries into every live experience, whether it is chat, reactions, or collaborative interaction.

[Chart: real-time messaging app usage. Source: Statista]

Live environments concentrate demand rather than distribute it evenly. Traffic clusters around specific moments. Concurrency can double within minutes, and those users remain connected while message volume increases sharply. That concentration exposes limits in chat scalability far more quickly than steady growth ever would.

The operational impact tends to follow a familiar pattern:

Live scenario        System pressure                    Business impact
Sports final         Sudden surge in concurrent users   Latency becomes public
Product launch       Burst of new sessions              Onboarding friction
Viral stream moment  Rapid fan-out across channels      Inconsistent experience
Regional spike       Localised traffic surge            Infrastructure imbalance

For live platforms, volatility comes with the territory.

When delivery slows or behaviour becomes inconsistent, engagement drops first. Retention follows. Users rarely blame infrastructure. They blame the platform.

Designing for unpredictable load requires architecture that assumes spikes are normal and isolates failure when it occurs. If you operate a live platform, that discipline determines whether users experience seamless interaction or visible strain.

High Concurrency and Chat Scalability

In live environments, real-time messaging operates under sustained concurrency rather than occasional bursts. Users remain connected, they interact continuously, and activity compounds when shared moments occur.

High concurrency is not simply about having many users online. It involves managing thousands, sometimes millions, of persistent connections sending and receiving messages at the same time. Every open connection consumes resources, and messages may need to be delivered to large groups of active participants without delay.

This is where chat scalability really gets tested.

In steady conditions, most systems appear stable. When demand synchronises, message fan-out increases rapidly, routing paths multiply, and coordination overhead grows. Small inefficiencies that were invisible during testing begin to surface. Response times drift. Ordering becomes inconsistent. Queues expand before alerts signal a problem.

High concurrency does not introduce entirely new issues. It reveals architectural assumptions that only become visible at scale. Concurrency increases are predictable in live systems. The risk lies in whether the messaging layer can sustain that pressure without affecting user experience.

Messaging Architecture Limits

The pressure created by high concurrency does not stay abstract for long. It shows up in the messaging architecture.

When performance degrades under load, the root cause usually sits there. At scale, every message must be routed, processed, and delivered to the correct subscribers. In distributed systems, that requires coordination across servers, and coordination carries cost. Under sustained traffic, small inefficiencies compound quickly.

Routing layers can become bottlenecks when messages must propagate across multiple nodes. Queues expand when incoming traffic outpaces processing capacity. Latency increases as backlogs grow. If state drifts between nodes, messages may arrive late or appear out of sequence.

This is where the earlier discussion of chat scalability becomes tangible. It is not only about supporting more users. It is about how efficiently the architecture distributes load and maintains consistency when concurrency remains elevated.

These limits rarely appear during controlled testing with predictable traffic. They emerge under real usage, where concurrency is sustained and message patterns are uneven.

Well-designed systems account for this from the outset. They reduce coordination overhead, isolate failure domains, and scale horizontally without introducing fragility. When they do not, performance drift becomes visible long before a full outage occurs, and users feel the impact immediately.

Fault Tolerance and Scaling

If you operate a live platform, this is where design choices become visible.

Once architectural limits are exposed, the question is how your system behaves as demand continues to rise.

Scaling real-time messaging is about making sure that when components falter, the impact is contained. Distributed systems are built on a simple assumption: things break. You will see restarts, reroutes and unstable network conditions. But the real test is whether your architecture absorbs the shock or amplifies it.

Systems built with fault isolation in mind tend to recover locally. Load shifts across nodes. Individual components stabilise without affecting the wider service. Systems built around central coordination points are more vulnerable to ripple effects.

In practical terms, the difference shows up as:

  • Localised disruption rather than cascading instability
  • Brief slowdown instead of prolonged degradation
  • Controlled recovery rather than platform-wide interruption

These behaviours define whether users experience resilience or instability.

Fault tolerance determines how the system behaves when conditions are at their most demanding.

Real-Time Messaging in Entertainment

Entertainment platforms expose weaknesses in real-time messaging quickly because traffic converges rather than building steadily over time.

When a live event captures attention, users respond together. Demand rises sharply within a short window, and those users remain connected while interaction increases. The stress on the system comes not from gradual growth, but from concentrated activity.

Take the widespread Cloudflare outage in November 2025. As a core infrastructure provider handling a significant share of global internet traffic, its disruption affected major platforms simultaneously. The issue was due to underlying infrastructure, but the impact was immediate and highly visible because so many users were active at once.

Live gaming environments operate under comparable traffic patterns by design. During competitive matches on FACEIT, large numbers of players remain connected while scores, rankings, and in-game events update continuously. Activity intensifies around key moments, increasing message throughput while persistent connections stay open.

[Image: real-time messaging during a FACEIT match]

Across these environments, the pattern is consistent. Users connect simultaneously, interact continuously, and expect immediate feedback. When performance begins to drift, the impact is shared rather than isolated.

A Note on Architecture

This is where architectural choices begin to matter.

Platforms that manage sustained concurrency and recover predictably under pressure tend to share certain structural characteristics. In messaging environments, MongooseIM is one example of infrastructure designed around those principles.

In practical terms, that means:

  • Supporting large numbers of persistent connections without central bottlenecks
  • Distributing load across nodes to reduce coordination overhead
  • Containing failure within defined boundaries rather than allowing it to cascade
  • Maintaining message consistency even when traffic intensifies

These design choices do not eliminate volatility. They determine how the system behaves when it occurs.

In live entertainment platforms, that distinction shapes whether pressure remains internal or becomes visible to users.

Conclusion

Real-time messaging raises expectations that are easy to meet under steady conditions and far harder to sustain when attention converges.

What breaks first is rarely availability. It is timing. It is the subtle drift in delivery and consistency that users notice before any dashboard signals a failure.

Live environments make that visible because traffic arrives together and interaction compounds quickly. Concurrency is not the exception. It is the operating model. Whether the experience holds depends on how the architecture distributes load and contains failure.

Designing for that reality early makes scaling more predictable and reduces the risk of visible strain later. If you are building or modernising a platform where real-time interaction matters, assess whether your messaging architecture is prepared for sustained concurrency. Get in touch to continue the conversation.

The post What Breaks First in Real-Time Messaging? appeared first on Erlang Solutions.

by Erlang Solutions Team at March 12, 2026 11:56

March 09, 2026

Erlang Solutions

Reliability is a Product Decision

Reliability is often treated as something that can be improved once a system is live. When things break, the focus shifts to monitoring, incident response, and recovery, with the belief that resilience can be strengthened over time as scale reveals weaknesses.

In reality, most of it is set much earlier.

Long before a system faces sustained demand, its underlying design has already shaped how it will respond under pressure. Choices about service boundaries, data handling, deployment models, and fault management influence whether a problem stays contained or spreads.

The conversation is gradually moving from reliability to resilience because distributed systems rarely operate without failure. The more useful question is how a platform continues running when parts of it inevitably fail. The sections that follow explore how early architectural decisions shape that outcome, why their impact becomes more visible at scale, and what it means to build resilience from the beginning rather than react to it later.

Early Decisions Create Long-Term Behaviour

Large-scale failures rarely emerge without warning. What appears sudden at scale is often the predictable outcome of structural decisions made earlier, when different commercial pressures shaped priorities. 

In the early stages of a product, the focus is understandably on delivering value quickly, reducing development friction, and validating the market. These are rational business decisions. However, architecture chosen primarily for speed can quietly define the operational ceiling of the system, setting limits that only become visible once demand increases.

Systems Behave as They Were Built to Behave

Outages are often described as “unexpected events,” but distributed systems typically respond to pressure in ways that reflect their design. How services communicate, how state is shared, where dependencies sit, and how failure is managed all influence whether disruption remains contained within a single component or spreads across the wider platform.

Research from Google’s Site Reliability Engineering work shows that around 70% of outages are caused by changes to a live system, such as configuration updates, deployments, or operational changes, rather than by hardware failures. Similarly, the Uptime Institute’s Annual Outage Analysis identifies configuration errors and dependency failures as leading causes of major disruption.

These findings are unsurprising. In distributed environments, dependencies increase and recovery paths become harder to trace, which means that architectural shortcuts that once seemed minor can have disproportionate impact under sustained load. Systems tend to fail along the structural lines already drawn into them, and those lines are shaped by early design decisions, even when those decisions were commercially sensible at the time.

Trade-offs That Compound Over Time

Architectural decisions are rarely made under ideal conditions. Early on, speed to market matters, simplicity reduces friction, and shipping is the priority. A tightly coupled service can help teams move faster, a single-region deployment keeps things straightforward, and limited observability may feel acceptable when traffic is still modest.

But over time, these trade-offs compound.

  • Limited isolation between services makes it easier for problems in one area to affect others.
  • Shared infrastructure can create hidden dependencies that only become visible under heavy demand.
  • Concentrated regional deployments increase the impact of a local outage or cloud disruption.
  • Observability that felt sufficient at launch can fall short when trying to understand complex behaviour at scale.

At a smaller scale, these constraints can go largely unnoticed. As usage increases and demand becomes less predictable, they start to shape how the system responds under pressure. What once felt manageable begins to show its limits.

This is rarely about a lack of technical ability. It is simply what happens as complexity builds over time. Every system reflects the trade-offs made in its early stages, whether those choices were deliberate or just practical at the time.

When Architecture Becomes Business Exposure

As systems grow in scale and complexity, the way they are built starts to show up in practical ways. When services are tightly connected, recovery takes longer. When failures are not well contained, a problem in one area can disrupt others. Incidents become harder to resolve and more expensive to manage.

The cost of disruption is not abstract. ITIC’s 2023 Hourly Cost of Downtime Survey reports that more than 90% of mid-size and large enterprises estimate a single hour of downtime costs over $300,000, and roughly 41% place that figure between $1 million and $5 million per hour. At that level, even short-lived incidents carry material financial impact.

For organisations that rely on digital platforms to generate revenue, those numbers represent missed transactions, operational strain, and damage to customer trust. At that point, system design is no longer just an engineering decision. It becomes a business decision with measurable financial consequences.

When Failure Is Public

Some systems fail quietly, disrupting internal workflows or back-office processes with limited external visibility. Others operate in real time, where performance issues are experienced directly by customers, investors, and partners.

In sectors such as entertainment, demand is often synchronised and predictable. Premieres, sporting events, ticket releases, and major launches concentrate traffic into specific windows, placing simultaneous pressure on application layers, databases, and third-party services. These moments are not unusual spikes; they are built into the operating model. Platforms designed for large-scale engagement are expected to handle peak demand as part of normal business activity.

That expectation changes the stakes. When performance degrades in these environments, it is noticed immediately and often publicly. Frustration spreads quickly, confidence can shift in hours, and what might have been an operational issue becomes a visible business problem.

In this context, resilience shapes whether a high-demand event reinforces confidence in the platform or exposes its limits. When failure is experienced directly by users, it moves beyond internal metrics and becomes part of the customer experience itself.

Designing for Resilience

If failure is inevitable in distributed systems, then resilience has to be built in from the start. It cannot be something added later when the first serious incident forces the issue.

Resilient systems are structured so that problems stay contained. A fault in one component should not automatically take others down with it, and services should be able to keep operating even when parts of the system are degraded. External dependencies will fail. Traffic will spike. The design needs to account for that reality.

This way of thinking shifts the focus. Instead of trying to prevent every possible issue, teams concentrate on limiting the impact when something goes wrong. Speed still matters, but so does the ability to grow without introducing instability.

Technology choices can support that approach. The Elixir programming language, running on the BEAM, was designed for environments where downtime had real consequences. Its structure reflects that:

  • Applications are made up of many small, independent processes rather than large, tightly connected components.
  • Failures are expected and handled locally.
  • Supervision and recovery are built into the runtime so the wider system keeps running.

No language guarantees reliability, but tools built around fault tolerance make it easier to create systems that continue operating under pressure.
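As a rough, language-agnostic illustration (plain JavaScript rather than Elixir/OTP, which expresses this with supervision trees), the pattern the bullets describe amounts to treating a worker crash as an expected event, handling it locally, and restarting the worker so the rest of the system keeps running. The `supervise` and `flaky` names below are hypothetical, invented for this sketch:

```javascript
// Minimal sketch of the supervision idea: failures are expected, handled
// locally, and a crashed worker is restarted so the wider system keeps
// running. This is an illustration, not how the BEAM actually does it.

function supervise(worker, maxRestarts = 3) {
  let restarts = 0;
  while (true) {
    try {
      return worker(); // normal exit: hand the result back
    } catch (err) {
      restarts += 1;
      if (restarts > maxRestarts) {
        throw err; // give up: escalate to the next supervision level
      }
      console.log(`worker failed (${err.message}); restart ${restarts}/${maxRestarts}`);
    }
  }
}

// A worker that fails twice with a transient fault, then succeeds.
let attempts = 0;
function flaky() {
  attempts += 1;
  if (attempts < 3) throw new Error("transient fault");
  return "ok";
}

console.log(supervise(flaky)); // logs two restart messages, then "ok"
```

In a real BEAM system the supervisor is itself supervised, so the "give up and escalate" branch propagates the failure upward rather than crashing the whole application.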

To conclude

By the time serious issues appear at scale, most of the important decisions have already been made.

Failure is part of running distributed systems. What matters is whether problems stay contained and whether the platform keeps operating when something goes wrong.

Thinking about resilience early makes growth easier later. It helps protect revenue, maintain trust, and avoid the instability that forces costly redesigns.

If you are building distributed platforms where reliability directly affects performance and reputation, now is the time to treat resilience as a core design decision. Get in touch to discuss how to build it into your architecture from the start.

The post Reliability is a Product Decision appeared first on Erlang Solutions.

by Erik Schön at March 09, 2026 09:10

March 05, 2026

The XMPP Standards Foundation

The XMPP Newsletter February 2026

XMPP Newsletter Banner


Welcome to the XMPP Newsletter, great to have you here again! This issue covers the month of February 2026.

The XMPP Newsletter is brought to you by the XSF Communication Team.

Just like any other product or project by the XSF, the Newsletter is the result of the voluntary work of its members and contributors. If you are happy with the services and software you may be using, please consider saying thanks or help these projects!

Interested in contributing to the XSF Communication Team? Read more at the bottom.

XSF Announcements

XMPP logo in Font Awesome

Just as we announced back in our Newsletter December 2025 issue, the official XMPP logo now comes bundled up in Font Awesome since version 7.2.0. And it looks Awesome!

XMPP Events

  • XMPP material for DI.DAY: four recipes for using an open chat protocol at the “Digital Independence Day” (DI.DAY), the German XMPP community initiative to help people start using independent messaging!

Four recipes for an open chat protocol at the Digital Independence Day!

  • XMPP Sprint in Berlin (DE / EN): will take place from Friday June 19th to Sunday June 21st, 2026, at the Wikimedia Deutschland e.V. offices in Berlin, Germany. If this sounds like the right event for you, come and join us! Just make sure to list yourself here, so we know how many people will attend and we can plan accordingly. If you have any questions or concerns, join us at the chatroom: sprints@muc.xmpp.org!

Videos and Talks

XMPP Articles

XMPP Software News

XMPP Clients and Applications

  • Conversations has released versions 2.19.10, 2.19.11 and 2.19.12 for Android. These releases introduce refactored QR code scanning and URI handling, a fix for rotation issues in tablet mode, and an option to delete messages when banning someone in a public channel, among other things. You can take a look at the changelog for all the details.
  • Fluux Messenger, a modern, cross-platform XMPP client for communities and organizations, has released versions 0.11.3, 0.12.0, 0.12.1, 0.13.0, 0.13.1 and 0.13.2, with a list of additions, new features, improvements and bugfixes far longer than we could ever cover here! Go straight to the changelog for all the details!
Fluux Messenger main window and XMPP console


  • Gajim has released version 2.4.4 of its free and fully featured chat app for XMPP. This release comes with link previews, many improvements for macOS, and bugfixes. Thank you for all your contributions! You can take a look at the changelog for all the details.
Link previews in Gajim 2.4.4


  • Monal has released versions 6.4.18 for iOS and macOS.
  • Monocles has released versions 2.1.2 and 2.1.3 of its chat client for Android. The former brings fixes for message retraction, images sent as links, and infinite recursion in TagEditorView; adds support for links in posts; disables the publish button after a click to prevent double posts; refactors the message correction UI; and changes the social feed pubsub access model. The latter adds pausing and resuming stories in the delete dialog, fixes progress bar handling and contact lookup for stories, and brings fixes from Conversations, plus updated translations.
  • Profanity has released version 0.16.0 of its console-based XMPP client written in C. This release fixes OTR detection, OMEMO startup, the overwriting of new accounts when running multiple instances, and reconnecting when no account has been set up yet, and adds a new /changes command that lets the user compare the runtime configuration against the saved configuration, among many other fixes and improvements. Make sure to read the changelog for all the details!
  • xmpp-web has released version 0.11.0 of its lightweight web chat client for XMPP servers.
XMPP-Web main window


XMPP Servers

  • ProcessOne is pleased to announce the bugfix release of ejabberd 26.02. Make sure to read the changelog for all the details and a complete list of fixes and improvements on this release.

XMPP Libraries & Tools

  • python-nbxmpp, a Python library that provides a way for Python applications to use the XMPP network, version 7.1.0 has been released. Full details on the changelog.
  • QXmpp, the cross-platform C++ XMPP client and server library, versions 1.13.1, 1.14.1, 1.14.2 and 1.14.3 have been released. Full details on the changelog.
  • Siltamesh, a simple bridge between Meshtastic and XMPP networks, version 0.2.0 has been released.
  • Slidge version 0.3.7 has been released. You can check the intermediate changelog from 0.3.6 to 0.3.7 for all the details.
  • Slixmpp, the MIT-licensed XMPP library for Python 3.7+, versions 1.13.0 and 1.13.2 have been released. You can read their respective official release announcements here and here for all the details.
  • xmpppy, a Python library that is targeted to provide easy scripting with Jabber, version 0.7.3 has been released. Full details on the changelog.

Extensions and specifications

The XMPP Standards Foundation develops extensions to XMPP in its XEP series in addition to XMPP RFCs. Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).

Proposed

The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

  • Link Metadata
    • This specification describes how to attach metadata for links to a message.

New

  • Version 0.1.0 of XEP-0510 (End-to-End Encrypted Contacts Metadata)
    • Accepted as Experimental by council vote (dg)
  • Version 0.1.0 of XEP-0511 (Link Metadata)
    • Accepted as Experimental by council vote (dg)

Deferred

If an experimental XEP is not updated for more than twelve months, it will be moved off Experimental to Deferred. If there is another update, it will put the XEP back onto Experimental.

  • No XEPs deferred this month.

Updated

  • Version 1.26.0 of XEP-0001 (XMPP Extension Protocols)
    • Surface (and correct) the source control information.
    • Surface the publication URL (although I assume anyone reading this has figured that one out by now).
    • Surface the contributor side of things.
    • Add bit about XEP authors making PRs if they don’t exist - this is “new” rather than documenting existing practice.
    • Add bit about PRs getting XEP author approval (existing practice hithertofore undocumented).
    • Add bit about Council (etc) adding authors if they drop off (existing practice hithertofore undocumented).
    • Add note to clarify that Retraction doesn’t mean Deletion (existing practice, documented, but has been misunderstood before). (dwd)
  • Version 1.1.0 of XEP-0143 (Guidelines for Authors of XMPP Extension Protocols)
    • Reflect preference for GitHub pull requests for initial submission,
    • PRs to contain only one changed XEP. (dwd)
  • Version 0.8.0 of XEP-0353 (Jingle Message Initiation)
    • Adapt usage of JID types to real-world usage:
      • Send JMI responses to full JID of initiator instead of bare JID
      • Send JMI <finish/> element to full JID of both parties (melvo)

Last Call

Last calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call can help improve the XEP before returning it to the Council for advancement to Stable.

  • No XEP last calls this month.

Stable

  • No stable XEPs this month.

Deprecated

  • No XEPs deprecated this month.

Rejected

  • No XEPs rejected this month.

Spread the news

Please share the news on other networks:

Subscribe to the monthly XMPP newsletter
Subscribe

Also check out our RSS Feed!

Looking for job offers or want to hire a professional consultant for your XMPP project? Visit our XMPP job board.

Newsletter Contributors & Translations

This is a community effort, and we would like to thank translators for their contributions. Volunteers and more languages are welcome! Translations of the XMPP Newsletter will be released here (with some delay):

  • Contributors:

    • To this issue: emus, cal0pteryx, Gonzalo Raúl Nemmi, Ludovic Bocquet, sokai, XSF iTeam
  • Translations:

    • French: Adrien Bourmault (neox), alkino, anubis, Arkem, Benoît Sibaud, mathieui, nyco, Pierre Jarillon, Ppjet6, Ysabeau
    • Italian: Mario Sabatino, Roberto Resoli
    • Portuguese: Paulo

Help us to build the newsletter

This XMPP Newsletter is produced collaboratively by the XMPP community. Each month’s newsletter issue is drafted in this simple pad. At the end of each month, the pad’s content is merged into the XSF GitHub repository. We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. You have a project and want to spread the news? Please consider sharing your news or events here, and promote it to a large audience.

Tasks we do on a regular basis:

  • gathering news in the XMPP universe
  • short summaries of news and events
  • summary of the monthly communication on extensions (XEPs)
  • review of the newsletter draft
  • preparation of media images
  • translations
  • communication via media accounts

Unsubscribe from the XMPP Newsletter

For this newsletter either log in here and unsubscribe or simply send an email to newsletter-leave@xmpp.org. (If you have not previously logged in, you may need to set up an account with the appropriate email address.)

License

This newsletter is published under CC BY-SA license.

March 05, 2026 00:00


February 24, 2026

JMP

Google Wants to Control Your Device

Today we join with other organizations in signing the open letter to Google about their plans to require all Android app developers to register centrally with Google in order to distribute applications outside the Google Play Store. You should go read the letter; it is quite well done. We want to talk a little bit additionally about why sideloading (aka installing apps on your own device, or “directly installing” as the letter puts it) is important to us.

In early fall of 2024 Google Play decided to remove and ban the Cheogram app from their store. Worse, since it had been listed by Play as “malware”, Google’s Play Protect system began warning existing users that they should uninstall the existing app from their devices. This was, as you might imagine, very bad for us. No new customers could get the app, and existing customers were contacting support unsure how to get back into their account after being tricked into removing it.

After a single submission to Google Play appealing this decision, they came back very quickly affirming “yes, this app seems to be malware.” There was no indication of why they thought that, just a decision. At this point the box we could use to submit new appeals also went away. With no appeals process available and requests to what little support Google offers going totally ignored, it was not clear if we were ever going to be able to distribute our app in the Play Store again. After months of being delisted we finally got a lucky break. In talking with some of our contacts at the Software Freedom Conservancy, they offered to write to some of their contacts deep inside Google and ask if there was anything that could be done. When their contact pushed internally at Google to get more information, suddenly the app was re-activated in the Play Store.

I want to be clear here. We did not change the app. We did not upload a new build. This app Google had been so very, very sure was “malware” was fully reinstated and available to customers again the moment a human being actually looked at it. It was obvious to any human reviewing the app that it was not malware, so they restored it to the store immediately. They never replied to that final request, and no details about what happened were ever made available. From that point on Google has essentially pretended that this never happened and that the app was always in good standing in the Play Store. If we had not been able to get in contact with a human and really push them to take a look, however, we would never have been reinstated.

Despite our good fortune, we still lost months of potential business over this.  Of course you’ve heard stories like this before. Stories of Play Store abuse are a dime a dozen and most of them don’t have the “happy” ending ours does. What does this have to do with “sideloading” and the open letter? Well, despite all the months of lost business, and despite all the existing customers being told to uninstall their app if they had got it from Play Store, we lost no more than 10% of our customers and continued to onboard new ones during the entire time.  How is this possible?  The main reason is direct installs (“sideloading”).  The majority of our customers get the app from our preferred sources on F-Droid and Itch. These customers were not told by Play Protect to remove their app. During the time we were delisted from Play Store we removed the link to Play Store from our website and new customers were instructed to use F-Droid. Of course we still lost some business here, some people were unable or unwilling to use F-Droid or other direct install options, but the damage was far, far less than it might have otherwise been.

What Google is proposing would allow them to ban anyone from creating apps which may be directly installed without their approval. One of the reasons they say they need to do this is to protect people from malware! Yet even if this were the narrow purpose of a ban, it would still routinely catch apps which are not nefarious in any way, just as ours wasn’t. Furthermore, with all apps and developers registered in their system, a ban under these new rules could result in everyone being told to uninstall the app by Play Protect, not just those who got it from the Play Store to begin with. This would leave app developers who are erroneously marked by Google as malware with no options, no recourse, no way to appeal, praying there is a friend of a friend who knows someone deep in Google who can poke the right button. This is just not an acceptable future for the world’s largest mobile platform.

by Stephen Paul Weber at February 24, 2026 20:06

February 12, 2026

ProcessOne

Fluux Messenger 0.13.0 - Native TCP Connection & Complete EU Language Coverage


We're excited to announce Fluux Messenger 0.13.0, featuring native TCP connections, complete European language coverage, and significant performance improvements.

Also, we recently passed the first 100 stars on GitHub. Thank you for your support and for believing in open, sovereign messaging!


What's New

Native TCP Connection Support on Desktop

Desktop users can now connect directly to XMPP servers via native TCP through our WebSocket proxy implementation. This means lower latency, better reliability, and native protocol handling. No more browser limitations.

We believe that's a nice milestone worth a blog post. Until now, desktop users needed their XMPP server to support WebSocket connections. With v0.13.0, you can connect to any standard XMPP server. We estimate this will enable 80% of users who couldn't connect before to finally use Fluux Messenger with their existing servers.

Complete European Union Language Coverage

Fluux Messenger now supports all 26 EU languages, making it truly pan-European. From Bulgarian to Swedish, Croatian to Maltese, we've got you covered. Languages include:

  • Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Icelandic, Irish, Italian, Latvian, Lithuanian, Maltese, Norwegian, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, Swedish.

Dynamic locale loading means faster initial startup while maintaining comprehensive language support.

If you spot any translation issues, feel free to contribute on our GitHub repository.

Clipboard Image Paste

Paste images directly from your clipboard with Cmd+V (macOS) or Ctrl+V (Windows/Linux). Copy from anywhere, paste into Fluux. It (should) just work ;). Tested and confirmed with Safari's "Copy Image" feature and system clipboard operations so far.

Clear Local Data on Logout

New privacy option to completely clear local data when logging out. Perfect for shared devices or when you need a fresh start.

Performance & Reliability Improvements

  • Smarter Message History Loading - We've completely redesigned our Message Archive Management (MAM) strategy. Message history now loads intelligently based on your scrolling behavior and available data, reducing unnecessary server requests.

  • Better Resource Management - Fixed duplicate avatar fetches when hashes haven't changed, reducing bandwidth usage and improving profile picture loading times.

  • Rock-Solid Scroll Behavior - Media loading no longer disrupts your scroll position. The scroll-to-bottom feature now works reliably, even when images and files are loading.

  • Better Windows Tray - Improved tray behavior on Windows for a more native experience.

  • macOS Sleep Recovery - Fixed layout corruption that could occur after your Mac woke from sleep.

UI & UX Polish

  • Consistent attachment styling across light and dark themes
  • Fixed sidebar switching with Cmd+U keyboard shortcut
  • Improved new message markers - position correctly maintained when switching conversations
  • Better context menus - always stay within viewport bounds, no more cut-off menus
  • Markdown preview accuracy - bold and strikethrough now properly shown in message previews

Linux Packaging

Improved Linux packaging using native distribution tools for better integration with your system package manager.

Developer Experience

Centralized notification state with viewport observer provides better performance and more reliable notification handling across the application.


Get Fluux Messenger

Download for Windows, macOS, or Linux in the latest Release page.

Source code is available at: GitHub


Your messages, your infrastructure: no vendor lock-in.
Sovereign by design. Built in Europe, for everyone.

by Adrien at February 12, 2026 17:39

February 11, 2026

ProcessOne

🚀 ejabberd 26.02

Contents:

ChangeLog

  • Fixes issue with adding hats data in presences sent by group chats (#4516)
  • Removes the mod_muc_occupantid module and integrates its functionality directly into mod_muc (#4521)
  • Fixes issue with occupant-id values being reset after restart of ejabberd (#4521)
  • Improves handling of mediated group chat invitations in mod_block_stranger (#4523)
  • Properly install mod_invites templates during make install (#4514)
  • Better errors in mod_invites (#4515)
  • Accessibility improvements in mod_invites (#4524)
  • Improves handling of requests with invalid URL-encoded values in ejabberd_http
  • Improves handling of invalid responses to disco queries in mod_pubsub_serverinfo
  • Fixes conversion of MUC room configs from ejabberd older than 21.12
  • Fixes to autologin in WebAdmin

If you are upgrading from a previous version, there are no changes in SQL schemas, configuration, API commands or hooks.

Notice that mod_muc now incorporates the feature from mod_muc_occupantid, and that module has been removed. You can remove mod_muc_occupantid from your configuration file, as it is now unnecessary; ejabberd simply ignores it.
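For illustration, the relevant part of an ejabberd.yml modules section after upgrading might look like this (a minimal sketch; real configurations will carry more options, and the mod_muc option shown is just an example):

```yaml
modules:
  ## mod_muc_occupantid: {}    # can be deleted: ignored by ejabberd 26.02+
  mod_muc:
    host: "conference.@HOST@"  # example option; occupant-id support is now built in
```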

Check also the commit log: https://github.com/processone/ejabberd/compare/26.01...26.02

Acknowledgments

We would like to thank everyone who contributed source code and translations:

And also to all the people contributing in the ejabberd chatroom, issue tracker...

Improvements in ejabberd Business Edition

Customers of the ejabberd Business Edition, in addition to all those improvements and bugfixes, also get the following change:

  • Change default_ram_db from mnesia to p1db when using p1db cluster_backend

ejabberd 26.02 download & feedback

As usual, the release is tagged in the Git source code repository on GitHub.

The source package and installers are available in ejabberd Downloads page. To check the *.asc signature files, see How to verify ProcessOne downloads integrity.

For convenience, there are alternative download locations like the ejabberd DEB/RPM Packages Repository and the GitHub Release / Tags.

The ecs container image is available in docker.io/ejabberd/ecs and ghcr.io/processone/ecs. The alternative ejabberd container image is available in ghcr.io/processone/ejabberd.

If you consider that you've found a bug, please search or file a bug report on GitHub Issues.

by Jérôme Sautret at February 11, 2026 10:01

February 08, 2026

Mathieu Pasquet

slixmpp v1.13.2 (and .1)

Version 1.13.0 shipped with a packaging bug that affects people trying to build wheels using setuptools. 1.13.1 was an attempt to fix that (it made packaging from git work, somehow), and 1.13.2 is the correct fix, thanks to one single additional character.

There are no other changes, and pypi wheels are not affected because they are built in CI with uv.

by mathieui at February 08, 2026 19:02

Sam Whited

DJing for Contra and Lindy

Last month a friend invited me to trade songs with him DJing at an Atlanta Lindy Hop social dance. This was my first time DJing for a social dance other than Contra, and I was surprised by what a different experience it was. To that end, this post will be a mix of post-mortem as I’ve done for contra dances, but also a reflection on the differences between DJing for called and non-called social dances.

Prep

Preparation was the first, and perhaps most obvious, place where the two types of DJing vastly differed.

Preparing for Contra requires hours of sorting new tracks, creating mixes, adding hot cues, fixing beat grids, and analyzing song structure. Once the tracks are all analyzed and annotated, I can then create a playlist of mixes and start practicing them.

For Lindy, on the other hand, I can sit down and prepare a basic set in a few hours right before the dance. Sorting any new tracks still needs to be done, but otherwise I mostly don’t have to do any work other than possibly adding a cue point to mark where to start a track on the rare version where the beginning doesn’t work (i.e. live recordings with lots of chatter or applause before and after), and even that isn’t really necessary.

For Contra I normally make a playlist with all my mixes in an order that I think will work well, then shuffle it around on the day of the event depending on the dance picked by the caller. For Lindy I chose to make a crate instead (which differs from a playlist in that there can’t be duplicate tracks and there is no order to them), pulling in about 50 tracks (far more than I’d need, especially when trading songs with another DJ). I largely pull from those just to make selection quicker, while still dipping into the rest of my library on occasion when nothing I’d pre-selected fits the current vibe.

Nerves

When a dancer asks if I get nervous before DJing for contra I confidently say “no”. Not out of some toxic sense of bravado or machismo (I hope), or because I think I’m particularly good at it and don’t need to worry (I’m not, and I do), but because I know I’m going to make mistakes and I’m okay with that.

Instead of being nervous I tend to go into problem solving mode:

  • “How did I get the dancers 8 beats off the phrase?”
  • “Can I jump back to an earlier point in the song to give us more time without it sounding bad?”
  • “I haven’t practiced this mix enough, focus on the cue out point coming up.”
  • etc.

This apparently isn’t the case when I DJ Lindy. The first time, I was immediately a nervous wreck. This is likely particular to me, of course, but it was still an interesting difference that I wasn’t expecting given how much more technically difficult Contra DJing is. I knew intellectually that the technical side of doing a mix (i.e. beat matching, harmonic mixing, transitioning between tunes, etc.) isn’t the important part of DJing; the finesse of picking tracks that the crowd will like is what matters. A DJ who just plays a few contra songs but chooses them well is a much better crowd pleaser than one who does impressive mixing but picks bad songs. However, when it came right down to it I was still surprised by how much more nerve-wracking it was when that’s the only thing you’re being judged on.

At some point I remembered that it’s just a dance and if one of them is bad or doesn’t work, we’ll move on and play another song. If I can’t find something, playing something at random is fine too (not ideal, but fine). Once I accepted this I almost immediately started picking songs quicker and not having to rely on picking something random at the last minute after all. It’s much easier to focus on what will be good when you’re not worried about whether it will be perfect. After that we were able to cover for each other and play off each other, and it made the evening much more enjoyable. If one of us killed the floor, the other would try a different style and bring it back. Doug covered for me, and I was even able to cover for Doug once or twice and revive a somewhat empty floor! None of this is something I’d have to deal with in Contra where my mixes are pre-selected, practiced, and where I’m picking from a much smaller selection of tunes.

Playing to the Dancers

While Lindy may not require the technical skills used when mixing tunes in the same way that Contra does, the dancers also aren’t pre-lined-up as they would be with a Contra or Square dance. If you play a tune no one likes at Contra most everyone is stuck with it for 7 minutes or so until the caller wraps up the dance. The dance may fall apart entirely if the tune is too fast or doesn’t have clear enough phrasing for the dancers or caller to follow along, but generally speaking as long as the track sort of works the dancers will dance to it—and they’ll probably enjoy themselves. There are also fewer dances in an evening, so I suspect dancers feel more compelled to dance every single dance.

With Lindy that’s not the case. Many dancers will wait to hear if your track selection is one that they like before venturing out onto the dance floor, and even those that aren’t deliberately checking the music may not be as inclined to break off their conversations with a friend to go ask someone to dance if the track doesn’t immediately catch their interest.

There is also a several minute break between each tune in a contra dance where the caller teaches the next pre-choreographed dance. This means that if two back-to-back mixes have nothing to do with each other, or sound completely different, no one notices as long as you’ve picked a mix that goes well with the dance. Whereas with Lindy only a few seconds elapse between songs, just enough time for someone to thank their partner and go ask the next person to dance. This means that you have to consider the previous track when selecting the next one: a leap from fast balboa to slow blues is going to be jarring for the dancers and they may choose not to dance. Changing the style requires either gradually shifting between the two over several tracks, or maybe giving a slightly longer pause between songs to let the dancers get the previous style out of their ears and bodies.

Similarly there is a big difference in what music really gets the dancers’ blood pumping, though I suspect this is specific to these two venues and not to the type of social dancing as a whole.

With the contra venue I DJ for, it’s mostly a younger crowd and they’re mostly used to hearing a handful of local old-time string bands and the occasional high energy (but still traditional) New England style dance band. The contra chestnuts are an important part of the dance’s history, and sometimes you play them, but they don’t get many people excited.

This means that I can do two things if the energy is feeling low: I can play a track by a band they normally wouldn’t be able to hear that’s a bit more modern sounding, or otherwise has something different and interesting about it, or I can mix in a pop tune they’re familiar with. This is almost like a cheat code: if the energy is low, play a song they know and they’ll get excited and raise the energy of the floor.

With Lindy it’s not quite as simple as that. The reliable floor savers for Lindy are mostly old chestnuts by some of the jazz greats. Modern swing is sometimes played, but sparingly, and pop songs are a definite “no”. People come to the dance expecting a certain style, and they’re unhappy if you don’t stick to it. You have to work within the constraints of the genre, and picking floor savers is much more subtle work that requires carefully watching the dancers and seeing what will make them take to the dance floor on any particular night.

The act of watching the floor and adjusting the set as necessary may be obvious to club DJs and other social dance DJs, but to me it was a new experience, and one that I initially found somewhat paralyzing. For the first few tracks, my friend Doug had to cover for me and play as if he were the solo DJ while I flailed trying to find a track that I thought would work. I hadn’t fully internalized until that moment that in a Contra dance the DJ is picking the music to match the dance, but in Lindy Hop the dancers are picking the dance to match the music, and this may include not dancing entirely! I let the perfect become the enemy of the good and Doug had to pick several tracks in a row even though in theory we were trading songs 1 for 1.

Genre and Form

Sometime after the half I had mostly gotten over my nerves, and most of the newbies had drifted off home. I felt more comfortable trying a few experimental tracks that targeted primarily the more experienced late-night crowd: one a fast blues dance recorded at a legendary local Blues club, and the other two folksy tracks by bands that mostly play contra dances.

I was a bit nervous about these tracks as I didn’t know how they’d be received at this particular dance where the DJs play almost exclusively 30s and 40s jazz and the most modern bands that get any air time tend to be emulating the style of the jazz greats. Luckily they all went over well and filled the dance floor! A few people switched to blues dancing for “Sweet Betty”, while others continued with Lindy. Even better, for “Rhinoceros for Sue” the head DJ for the organization (who schedules everyone else, and therefore was the person to try and impress during the evening) went out on the floor to do some Balboa and came over afterwards and asked me what the track was!

I mention this because this is both a similarity and a difference from Contra. Techno-contra1 excluded, I largely can’t play anything for a contra dance that’s not in strict 32-bar “AABB” form. I can layer a modern beat or a pop song over a traditional contra tune, or maybe even find pop tunes that more or less stick to contra form and play it alone, but the form has to be there for the dancers (and some callers) to be comfortable.

With Lindy the freedom to play tracks with a wider variety of forms, so long as they respect the history of the dance, was a nice change of pace. That said, mostly it still needs to have a swung beat and I’m more at the mercy of what the dancers like (which is a narrower subset of music in the Lindy scene in my experience, as previously mentioned), so maybe this is more of a similarity than a difference.

Conclusion

Like learning to dance both lead and follow roles (in Lindy), or from either side of the minor set (in Contra): having done two different forms of DJing will, I suspect, make me a better DJ for either type of dance. I really enjoyed DJing for Lindy and was delighted when the head DJ asked if I’d like to start doing it regularly; hopefully I’ll be able to do it more often going forward!

If you’re curious about the set Doug and I ended up playing, the final set list for the evening can be found on Musicbrainz.


  1. contra set to pop music, often with glow sticks and blacklights. Mostly it has no relation to techno music though some techno may be used. Here the dancers and callers often aren’t expecting strict contra form. ↩︎

February 08, 2026 12:00

The XMPP Standards Foundation

The XMPP Newsletter January 2026

XMPP Newsletter Banner

Welcome to the XMPP Newsletter, great to have you here again! This issue covers the month of January 2026.

The XMPP Newsletter is brought to you by the XSF Communication Team.

Just like any other product or project by the XSF, the Newsletter is the result of the voluntary work of its members and contributors. If you are happy with the services and software you may be using, please consider saying thanks or help these projects!

Interested in contributing to the XSF Communication Team? Read more at the bottom.

XSF Announcements

Call for XSF Membership

If you are interested in joining the XMPP Standards Foundation as a member, please apply before February 15th, 2026, 00:00 UTC. Being a member signals a commitment to open standards and professional engagement in and with the XMPP community. Your membership helps position the XSF as a healthy organization, which in itself is valuable. It also grants voting rights on technical and administrative matters within the XSF. The application is a lightweight, free-of-cost process, and you can use membership to get more involved more easily, too.

XMPP Summit 28

The XSF held its 28th XMPP Summit during January 29th and 30th 2026 in Brussels (Belgium, Europe). During this two-day gathering, we discussed XMPP protocol development topics and kept making progress on current issues within the protocol and ecosystem. We would like to thank everyone who took part in the Summit for their continuous commitment and contribution to the XSF and all the XMPP related projects!

The XSF would like to extend a special thank you to those who made this XMPP Summit possible:

  • Edward Maurer from the XSF Communication Team as well as Daniel Gultsch and Guus der Kinderen from the XSF Summits, Conferences, and Meetups Team for their time, resources, strong commitment, thorough contribution and attention to detail in the organization and moderation of the event.
  • Ralph Meijer, Dan Caseley and Edwin Mons for their time and dedicated work on streaming the Summit.
  • Ralph Meijer for organising the XSF dinner, and Alexander Gnauck for his noted sponsor contribution.
  • Additional thanks to mathieui, Rémi and other unknown people helping to keep track of the notes during the event.

Welcome to the 28th XMPP Summit!

A summary of the main topics discussed is planned to be published soon at xmpp.org.

XMPP at FOSDEM 2026

During January 31st and February 1st, the XSF was present at FOSDEM 26 in Brussels, Belgium. The XMPP community took part in the Realtime Lounge, a room located on Level 1 of the AW building, together with the Prosody IM and Snikket projects, where several open source projects around the Decentralised Communication Devroom can present themselves.

We are pleased to say that there was a lot of interaction at the XMPP booth! A rather large number of FOSDEM visitors had the opportunity to come say “Hi!”, meet, interact, talk and have interesting conversations with many of the developers of the most popular clients, servers, tools and libraries that power the whole XMPP ecosystem and bring it to life.

In addition to the activities that took place at the XMPP booth, Daniel Gultsch and S1m, Jérôme Sautret, Timothée Jaussoin, and Özcan Oğuz hosted four different XMPP-related presentations in the Decentralised Communication Developer Room and the FOSS on Mobile track.

You can watch the presentations from the following list of links:

And, of course .. we had plenty of leaflets, informational material, and as always: the coolest stickers! ;)

We found Wally. Now, can you find the XMPP booth in the picture?

XSF Bluesky account is verified now (New handle!)

The XSF Bluesky account is verified now. This means that the profile handle is different now (@xmpp.org). You can find the profile with its new handle via https://bsky.app/profile/xmpp.org. Many thanks to cal0pteryx from the XSF Communication Team as well as singpolyma and Zash from the Infrastructure Team.

Events

XMPP listed as Alternative Chat at DI.DAY Initiative

The German initiative ‘Digital Independence Day’ (DI.DAY) kicked off this year to help users migrate to open-source software alternatives in various contexts. Among other services, XMPP is listed as an alternative chat option, and XMPP community members have created so-called switch recipes: Digital Independence Day. Find the related blogpost at xmpp.org.

There are more related activities and resources available from the XMPP Community:

XMPP Articles

XMPP Software News

XMPP Clients and Applications

  • Conversations has released versions 2.19.8 and 2.19.9 for Android. These versions introduce a fix for calls getting stuck at connecting when ‘Use Relays’ is enabled but the server doesn’t have any. They also come with bandwidth optimizations and they combine QR code related actions (show, scan, invite) into one central menu. You can take a look at the changelog for all the details.
  • Gajim has released versions 2.4.2 and 2.4.3 of its free and fully featured chat app for XMPP. Installing Gajim on macOS is now only a single click away. The Gajim 2.4.2 release brings a simplified macOS setup, easier sharing of files and support for link previews, along with many other improvements, changes and some important bug fixes. You can take a look at the changelog for all the details.
  • Kaidan has released versions 0.14.0 and 0.15.0 of its user-friendly and modern chat app for XMPP. The former release brings support for advanced media sharing and registration provider filtering, while the latter implements an integrated search field and experimental support for Audio/Video Calls (with most of the work being funded by NLnet via NGI Zero Entrust and NGI Zero Commons Fund with public money provided by the European Commission) in addition to some very useful improvements and lots of fixes! You can find a detailed list of new features, bugfixes and notes in their respective release announcements, or the changelog.

Kaidan 0.15.0: Experimental Audio/Video support on Linux.

  • Monal has released versions 6.4.17 for iOS and macOS.
  • Monocles has released version 2.1 of its chat app for Android. This is a huge update with three fundamental new features: Stories, Feeds, and Phone log. In addition to many other improvements and features such as account sorting and improved message deletion, this update also brings new support for multiple XEPs. Thanks to the standardization of XMPP, it is now possible to have social interaction across different XMPP platforms and messengers. These new features bring more functions that are fully compatible with the XMPP web platform Movim.

Create your stories today and make a post for your contacts in Monocles Chat or Movim!

XMPP Servers

  • Prosody IM is pleased to announce versions 13.0.3 and 13.0.4, both minor releases of the stable branch. The former comes with a range of tweaks, bug fixes and minor improvements, while the latter is the encouraged upgrade, partly due to a bug that was introduced into UUID generation in the previous release. Although not strictly bug fixes, some configuration-related improvements that help make configuring Prosody a little easier and more reliable also made their way into the latest release. Read all the details in the changelog, and as always, detailed download and install instructions are available on the download page for your convenience.
  • MongooseIM has released MongooseIM 6.5: Open for Integration. This release focuses on easier integration with your applications while continuing to deliver a scalable and reliable XMPP-based messaging server. The most important improvement in MongooseIM 6.5.0 is the production-ready integration with RabbitMQ, allowing external services to process the events from the server. It is worth noting that the mechanism is highly extensible – you can craft such extensions yourself.
  • ProcessOne is pleased to announce the release of ejabberd 26.01. This release addresses real operational pain points: export your data from one database backend and import it into another, and roster invites and invite-based account registration to let your users invite others without opening the gates to spam! Make sure to read the changelog for all the details and a complete list of changes, new features, fixes and improvements on this release.

XMPP Libraries & Tools

Extensions and specifications

The XMPP Standards Foundation develops extensions to XMPP in its XEP series in addition to XMPP RFCs. Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).

Proposed

The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

New

  • No new XEPs this month.

Deferred

If an experimental XEP is not updated for more than twelve months, it will be moved off Experimental to Deferred. If there is another update, it will put the XEP back onto Experimental.

  • No XEPs deferred this month.

Updated

  • Version 1.1.0 of XEP-0386 (Bind 2)
    • It’s authorization-identifier not authorization-identity (dg)

Last Call

Last calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call can help improve the XEP before returning it to the Council for advancement to Stable.

  • No Last Calls this month.

Stable

  • No stable XEPs this month.

Deprecated

  • No XEPs deprecated this month.

Rejected

  • No XEPs rejected this month.

Spread the news

Please share the news on other networks:

Subscribe to the monthly XMPP newsletter
Subscribe

Also check out our RSS Feed!

Looking for job offers or want to hire a professional consultant for your XMPP project? Visit our XMPP job board.

Newsletter Contributors & Translations

This is a community effort, and we would like to thank translators for their contributions. Volunteers and more languages are welcome! Translations of the XMPP Newsletter will be released here (with some delay):

  • Contributors:

    • To this issue: emus, cal0pteryx, Gonzalo Raúl Nemmi, Ludovic Bocquet, XSF iTeam
  • Translations:

    • French: Adrien Bourmault (neox), alkino, anubis, Arkem, Benoît Sibaud, mathieui, nyco, Pierre Jarillon, Ppjet6, Ysabeau
    • Italian: Mario Sabatino, Roberto Resoli
    • Portuguese: Paulo

Help us to build the newsletter

This XMPP Newsletter is produced collaboratively by the XMPP community. Each month’s newsletter issue is drafted in this simple pad. At the end of each month, the pad’s content is merged into the XSF GitHub repository. We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. You have a project and want to spread the news? Please consider sharing your news or events here, and promote it to a large audience.

Tasks we do on a regular basis:

  • gathering news in the XMPP universe
  • short summaries of news and events
  • summary of the monthly communication on extensions (XEPs)
  • review of the newsletter draft
  • preparation of media images
  • translations
  • communication via media accounts

Unsubscribe from the XMPP Newsletter

For this newsletter either log in here and unsubscribe or simply send an email to newsletter-leave@xmpp.org. (If you have not previously logged in, you may need to set up an account with the appropriate email address.)

License

This newsletter is published under CC BY-SA license.

February 08, 2026 00:00