Planet Jabber

November 15, 2024

The XMPP Standards Foundation

MongooseIM 6.3 - Monitor with Prometheus, scale up with CockroachDB

MongooseIM is a scalable, efficient, high-performance instant messaging server. At Erlang Solutions, we believe that it is essential to use the right tool for the job, and this is why the server implements the proven, open, and extensible XMPP protocol, which was designed for instant messaging from the beginning. Thanks to the inherent flexibility of XMPP, MongooseIM is very versatile and has a variety of applications. Being specified in RFC and XEP documents, the protocol also ensures compatibility with other software, including multiple clients and libraries. Similarly to the protocol, we have chosen the Erlang programming language because it was designed with the intention of handling large numbers of parallel connections - which is exactly the case in a messaging server.

With each new version, we introduce new features and improvements. For example, version 6.2.0 introduced our new CETS in-memory storage, making setup and autoscaling in cloud environments easier than before (see the blog post for details). The latest release 6.3.0 is no exception. The main highlight is the complete instrumentation rework, allowing seamless integration with modern monitoring solutions like Prometheus. Additionally, we have added CockroachDB to the list of supported databases, so you can now let this highly scalable database grow with your applications while avoiding being locked into your cloud provider.

Observability and instrumentation

In software engineering, observability is the ability to gather data from a running system in order to figure out what is going on inside: is it working as expected? Does it have any issues? How much load is it handling, and could it do more? There are many ways to improve the observability of a system, and one of the most important is instrumentation. Just like adding extra measuring equipment to a physical system, this means adding extra code to the software, allowing the system administrator to observe its internal state. This comes at a price: more work for the developers, increased complexity, and potential performance degradation caused by the collection and processing of additional data.

However, the benefits usually outweigh the costs, and the ability to inspect the system is often a critical requirement. It is also worth noting that the metrics and events gathered by instrumentation can be used for further automation, e.g. for autoscaling or sending alarms to the administrator.

Instrumentation in MongooseIM

Even before our latest release of MongooseIM, there were already multiple ways to observe its behaviour:

Metrics provide numerical values of measured system properties. The values change over time, and a metric can present the current value, a sum over a sliding window, or a statistic (histogram) of values from a given time period. Prior to version 6.3, MongooseIM stored such metrics with the help of the exometer library. To view the metrics, one had to configure an Exometer exporter, which would periodically send the metrics to an external service using the Graphite protocol. Because of the protocol, the metrics could only be exported to Graphite or InfluxDB version 1. One could also query a limited subset of metrics using our GraphQL API (or the legacy REST API) or the command line interface. Alternatively, metrics could be retrieved from the Erlang shell of a running MongooseIM node.

Logs are another type of instrumentation present in the code. They inform about events occurring in the system and, since version 4, they are events with an extensible map-like structure that can be formatted e.g. as plain text or JSON. Subsequently, they can be shown in the console or stored in files. You can also set up a log management system like the Elastic (ELK) Stack or Splunk – see the documentation for more details.

Prior to version 6.3.0, the instrumented code needed to call the log and metric APIs separately. Updating a metric and logging an event required two distinct function calls. Moreover, if there were multiple metrics (e.g. execution time and total number of calls), multiple function calls were required. The main issue with this solution, however, was the hardcoding of Exometer as the metrics library and the limitations of the Graphite protocol used to push the metrics to external services.

Instrumentation rework in MongooseIM 6.3

The lack of support for the modern and widespread Prometheus protocol was one of the main reasons for the complete rework of instrumentation in version 6.3, which is summarised in the following diagram:

The most noticeable difference is that in the instrumented code, there is just one event emitted. Such an event is identified by its name and a key-value map of labels, and contains measurements (with optional metadata) organised in a key-value map. Each event has to be registered before its instances are emitted with particular measurements. The point of this preliminary step is not only to ensure that all events are handled, but also to provide additional information about the event, e.g. the measurement keys that will be used to update metrics. Emitted events are then handled by configurable handlers. Currently, there are three such handlers: Exometer and Logger work similarly to before, and there is a new Prometheus handler, which stores the metrics internally in a format compatible with Prometheus and exposes them over an HTTP API. This means that any external service can now scrape the metrics using the Prometheus protocol. The primary use case would be Prometheus for metrics collection with a graphical tool like Grafana for display. If, however, you prefer InfluxDB version 2, you can easily configure a scraper that periodically puts new data into InfluxDB.

Apart from supporting the Prometheus protocol, additional benefits of the new solution include easier configuration, extensibility, and the ability to add more handlers in the future. You can also have multiple handlers enabled simultaneously, allowing you to gradually migrate your metric backend from Exometer to Prometheus. Conversely, you can also disable all instrumentation, which was not possible prior to version 6.3. Although this might seem to make little sense at first glance, since it renders the system a black box, it can be useful for gaining extra performance in some cases, e.g. when external metrics like CPU usage are enough, on an isolated embedded system, or when resources are very limited.

There are more options possible, and you can find them in the documentation. You can also find more details and examples of instrumentation in the detailed blog post.

CockroachDB – a database that scales with MongooseIM

MongooseIM works best when paired with a relational database like PostgreSQL or MySQL, enabling easy cluster node discovery with CETS and persistent storage for users’ accounts, archived messages and other kinds of data. Although such databases are not horizontally scalable out of the box, you can use managed solutions like Amazon Aurora, AlloyDB or Azure Cosmos DB for PostgreSQL. The downsides are possible vendor lock-in and the fact that you cannot host and manage the DB yourself. With version 6.3, however, the possibilities are extended to CockroachDB. This PostgreSQL-compatible distributed database can be used either as a provider-independent cloud-based solution or as an internally hosted cluster. You can instantly set it up in your local environment and take advantage of the horizontal scalability of both MongooseIM and CockroachDB. If you want to learn how to deploy both MongooseIM and CockroachDB in Kubernetes, see the documentation for CockroachDB and the Helm chart for MongooseIM, together with our recent blog post about setting up an auto-scalable cluster. If you are interested in having an auto-scalable solution deployed for you, please consider our MongooseIM Autoscaler.

What’s next?

You can read more about MongooseIM 6.3 in the detailed blog post. We also recommend visiting our product page to see the support options and services we offer. You can also try the server out at trymongoose.im.

Read about Erlang Solutions as sponsor of the XSF.

November 15, 2024 00:00

November 14, 2024

Erlang Solutions

MongooseIM 6.3: Prometheus, CockroachDB and more

MongooseIM is a scalable, efficient, high-performance instant messaging server using the proven, open, and extensible XMPP protocol. With each new version, we introduce new features and improvements. For example, version 6.2.0 introduced our new CETS in-memory storage, making setup and autoscaling in cloud environments easier than before (see the blog post for details). The latest release 6.3.0 is no exception. The main highlight is the complete instrumentation rework, allowing seamless integration with modern monitoring solutions like Prometheus. 

Additionally, we have added CockroachDB to the list of supported databases, so you can now let this highly scalable database grow with your applications while avoiding being locked into your cloud provider.

Observability and instrumentation

In software engineering, observability is the ability to gather data from a running system to figure out what is going on inside: is it working as expected? Does it have any issues? How much load is it handling, and could it do more? There are many ways to improve the observability of a system, and one of the most important is instrumentation. Just like adding extra measuring equipment to a physical system, this means adding additional code to the software, allowing the system administrator to observe its internal state. This comes at a price: more work for the developers, increased complexity, and potential performance degradation caused by the collection and processing of additional data.

However, the benefits usually outweigh the costs, and the ability to inspect the system is often a critical requirement. It is also worth noting that the metrics and events gathered by instrumentation can be used for further automation, e.g. for autoscaling or sending alarms to the administrator.

Instrumentation in MongooseIM

Even before our latest release of MongooseIM, there were already multiple ways to observe its behaviour:

Metrics provide numerical values of measured system properties. The values change over time, and a metric can present the current value, a sum over a sliding window, or a statistic (histogram) of values from a given time period. Prior to version 6.3, MongooseIM stored such metrics with the help of the exometer library. To view the metrics, one had to configure an Exometer exporter, which would periodically send the metrics to an external service using the Graphite protocol. Because of the protocol, the metrics could only be exported to Graphite or InfluxDB version 1. One could also query a limited subset of metrics using our GraphQL API (or the legacy REST API) or the command line interface. Alternatively, metrics could be retrieved from the Erlang shell of a running MongooseIM node.

Logs are another type of instrumentation present in the code. They inform about events occurring in the system and, since version 4, they are events with an extensible map-like structure that can be formatted e.g. as plain text or JSON. Subsequently, they can be shown in the console or stored in files. You can also set up a log management system like the Elastic (ELK) Stack or Splunk – see the documentation for more details.

The diagram below shows how these two types of instrumentation can work together:

The first observation is that the instrumented code needs to call the log and metric APIs separately. Updating a metric and logging an event requires two distinct function calls. Moreover, if there are multiple metrics (e.g. execution time and total number of calls), multiple function calls are required. There is potential for inconsistency between metrics, or between metrics and logs, because an error could happen between the function calls. The main issue with this solution, however, is the hardcoding of Exometer as the metrics library and the limitations of the Graphite protocol used to push the metrics to external services.
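This pre-6.3 pattern can be caricatured in a few lines of Python (purely illustrative – the real code is Erlang, and all names below are invented):

```python
# Caricature of the pre-6.3 pattern: metric updates and logging are
# separate calls, so they can drift apart or fail independently.
# All function and metric names are invented for illustration.

metric_store = {}
log_lines = []

def update_counter(name, value):
    """One call per metric update."""
    metric_store[name] = metric_store.get(name, 0) + value

def log_info(message, **fields):
    """A separate call for logging the same occurrence."""
    log_lines.append((message, fields))

def handle_message(sender):
    # Three distinct instrumentation calls for a single logical event;
    # an error between any two of them leaves the data inconsistent.
    update_counter("messages_sent", 1)
    update_counter("message_processing_time", 3)
    log_info("message sent", sender=sender)

handle_message("alice@localhost")
```

The three calls for one logical event are exactly the redundancy that the rework removes.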

Instrumentation rework in MongooseIM 6.3

The lack of support for the modern and widespread Prometheus protocol was one of the main reasons for the complete rework of instrumentation in version 6.3. Let’s see the updated diagram of MongooseIM instrumentation:

The most noticeable difference is that in the instrumented code, there is just one event emitted. Such an event is identified by its name and a key-value map of labels, and contains measurements (with optional metadata) organised in a key-value map. Each event has to be registered before its instances are emitted with particular measurements. The point of this preliminary step is not only to ensure that all events are handled, but also to provide additional information about the event, e.g. the measurement keys that will be used to update metrics. Emitted events are then handled by configurable handlers. Currently, there are three such handlers: Exometer and Logger work similarly to before, and there is a new Prometheus handler, which stores the metrics internally in a format compatible with Prometheus and exposes them over an HTTP API. This means that any external service can now scrape the metrics using the Prometheus protocol. The primary use case would be Prometheus for metrics collection with a graphical tool like Grafana for display. If, however, you prefer InfluxDB version 2, you can easily configure a scraper that periodically puts new data into InfluxDB.
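The register-then-emit flow with decoupled handlers can be sketched like this (again a Python toy, not MongooseIM's actual Erlang API; all names are invented):

```python
# Toy sketch of the register/emit/handler pattern: one emitted event,
# dispatched to any number of configured handlers.

registry = {}   # (event name, labels) -> declared measurement keys
handlers = []   # configurable handlers, e.g. a Prometheus or Log backend

def register(event, labels, measurement_keys):
    """Declare an event up front, so handlers know what to expect."""
    registry[(event, tuple(sorted(labels.items())))] = measurement_keys

def execute(event, labels, measurements):
    """Emit one event; every configured handler receives it."""
    key = (event, tuple(sorted(labels.items())))
    if key not in registry:
        raise KeyError(f"unregistered event: {event}")
    for handler in handlers:
        handler(event, labels, measurements)

# A minimal handler that just counts emitted events, standing in
# for a real metrics backend:
counts = {}
handlers.append(lambda e, l, m: counts.__setitem__(e, counts.get(e, 0) + 1))

register("sm_message_sent", {"host_type": "localhost"}, ["count", "time"])
execute("sm_message_sent", {"host_type": "localhost"}, {"count": 1, "time": 3})
```

The instrumented code makes a single `execute` call; how many metrics or log lines result from it is entirely the handlers' business.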

As you can see in the diagram, logs can also be emitted directly, bypassing the instrumentation API. This is the case for many logs in the system, because often there is no need for any metrics, and a log message is enough. In the future, though, we might decide to fully replace such logs with instrumentation events, because the latter are more extensible.

Apart from supporting the Prometheus protocol, additional benefits of the new solution include easier configuration, extensibility, and the ability to add more handlers in the future. You can also have multiple handlers enabled simultaneously, allowing you to gradually migrate your metric backend from Exometer to Prometheus. Conversely, you can also disable all instrumentation, which was not possible prior to version 6.3. Although this might seem to make little sense at first glance, since it renders the system a black box, it can be useful for gaining extra performance in some cases, e.g. when external metrics like CPU usage are enough, on an isolated embedded system, or when resources are very limited.

The table below compares the legacy metrics solution with the new instrumentation framework:

| Solution | Legacy: mongoose_metrics | New: mongoose_instrument |
|---|---|---|
| Intended use | Metrics | Metrics, logs, distributed tracing, alarms, … |
| Coupling with handlers | Tight: hardcoded Exometer logic, one metric update per function call | Loose: events separated from configurable handlers |
| Supported handlers | Exometer is hardcoded | Exometer, Prometheus, Log |
| Events identified by | Exometer metric name (a list) | Event name and labels (key-value map) |
| Event value | Single-dimensional numerical value | Multi-dimensional measurements with metadata |
| Consistency checks | None – it is up to the implementer to verify that the correct metric is created and updated | Each event is registered up front, ensuring that it is handled |
| API | GraphQL / CLI and REST | Prometheus HTTP endpoint, legacy GraphQL / CLI / REST for Exometer |

There are about 140 events in total, and some of them have multiple dimensions. You can find an overview in the documentation. In terms of dashboards for tools like Grafana, we believe that each use case of MongooseIM deserves its own. If you are interested in getting one tailored to your needs, don’t hesitate to contact us.

Using the instrumentation

Let’s see the new instrumentation in action now. Starting with configuration, let’s examine the new additions to the default configuration file:

[[listen.http]]
  port = 9090
  transport.num_acceptors = 10

  [[listen.http.handlers.mongoose_prometheus_handler]]
    host = "_"
    path = "/metrics"

(...)

[instrumentation.prometheus]

[instrumentation.log]

The first section, [[listen.http]], specifies the Prometheus HTTP endpoint. The following [instrumentation.*] sections enable the Prometheus and Log handlers with their default settings – in general, instrumentation events are logged at the DEBUG level, but you can change this. This configuration is all you need to see the metrics at http://localhost:9090/metrics when you start MongooseIM.
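For reference, the page served at that endpoint is plain text in the Prometheus exposition format: one metric per line, with optional labels and a numeric value. A minimal Python sketch of parsing such a payload, using a made-up sample (the metric names below are invented, not MongooseIM's actual ones):

```python
# Parse a tiny, made-up sample of the Prometheus text exposition format.
sample = """\
# TYPE sm_message_sent_count counter
sm_message_sent_count{host_type="localhost"} 42
# TYPE tcp_data_in_byte_size counter
tcp_data_in_byte_size{host_type="localhost"} 1337
"""

metrics = {}
for line in sample.splitlines():
    if line.startswith("#") or not line.strip():
        continue  # skip HELP/TYPE comments and blank lines
    name_and_labels, value = line.rsplit(" ", 1)
    metrics[name_and_labels] = float(value)

print(metrics)
```

In practice you would let Prometheus do the scraping and parsing, but a quick curl or script like this is handy for checking that the endpoint is up.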

As a second example, let’s say that you want only the Graphite protocol integration. In this case, you might configure MongooseIM to use only the Exometer handler, which would push the metrics prefixed with mim to the influxdb1 host every 60 seconds:

[[instrumentation.exometer.report.graphite]]
  interval = 60_000
  prefix = "mim"
  host = "influxdb1"

There are more options possible, and you can find them in the documentation.

Tracing – ad-hoc instrumentation

There is one more type of observability available in Erlang systems: tracing. It enables a user to take a more in-depth look into Erlang processes, including the functions being called and the internal messages being exchanged. It is meant to be used by Erlang developers, and should not be used in production environments because of the impact it can have on a running system. It is good to know about, however, because it can help diagnose unusual issues. To make tracing more user-friendly, MongooseIM now includes erlang_doctor with some MongooseIM-specific utilities (see the tr_util module). This tool provides low-level ad-hoc instrumentation, allowing you to instrument functions in a running system and gather the resulting data in an in-memory table, which can then be queried, processed, and – if needed – exported to a file. Think of it as a backup solution that could help you diagnose hidden issues, should you ever experience one.

CockroachDB – a database that scales with MongooseIM

MongooseIM works best when paired with a relational database like PostgreSQL or MySQL, enabling easy cluster node discovery with CETS and persistent storage for users’ accounts, archived messages and other kinds of data. Although such databases are not horizontally scalable out of the box, you can use managed solutions like Amazon Aurora, AlloyDB or Azure Cosmos DB for PostgreSQL. The downsides are possible vendor lock-in and the fact that you cannot host and manage the DB yourself. With version 6.3, however, the possibilities are extended to CockroachDB. This PostgreSQL-compatible distributed database can be used either as a provider-independent cloud-based solution or as an internally hosted cluster. You can instantly set it up in your local environment and take advantage of the horizontal scalability of both MongooseIM and CockroachDB. If you want to learn how to deploy both MongooseIM and CockroachDB in Kubernetes, see the documentation for CockroachDB and the Helm chart for MongooseIM, together with our recent blog post about setting up an auto-scalable cluster. If you are interested in having an auto-scalable solution deployed for you, please consider our MongooseIM Autoscaler.

Summary

MongooseIM 6.3.0 opens new possibilities for observability – the Prometheus protocol is supported out of the box, with a new, reworked instrumentation layer underneath that guarantees ease of future extension. Regarding database integration, you can now use CockroachDB to store all your persistent data. Apart from these changes, the latest version introduces a multitude of improvements and updates – see the release notes for more information. As the next step, we recommend visiting our product page to see the support options and services we offer. You can also try the server out at trymongoose.im. In any case, should you have any further questions, feel free to contact us.

The post MongooseIM 6.3: Prometheus, CockroachDB and more appeared first on Erlang Solutions.

by Pawel Chrzaszcz at November 14, 2024 10:16

November 12, 2024

ProcessOne

Docker: Keep ejabberd automagically updated with Watchtower

This blog post will guide you through the process of setting up an ejabberd Community Server using Docker and Docker Compose, and will also introduce Watchtower for automatic updates. This approach ensures that your configuration remains secure and up to date.

Furthermore, we will examine the potential risks associated with automatic updates and suggest Diun as an alternative tool for notification-based updates.

1. Prerequisites

Please ensure that Docker and Docker Compose are installed on your system.
It would be beneficial to have a basic understanding of Docker concepts, including containers, volumes, and bind-mounts.

2. Set up ejabberd in a docker container

Let’s first create a minimal Docker Compose configuration to start an ejabberd instance.

2.1: Prepare the directories

For this setup, we will create a directory structure to store the configuration, database, and logs. This will assist in maintaining an organised setup, facilitating data management and backup.

mkdir ejabberd-setup && cd ejabberd-setup
touch docker-compose.yml
mkdir conf
touch conf/ejabberd.yml
mkdir database
mkdir logs

This should give you the following structure:

ejabberd-setup/
├── conf
│   └── ejabberd.yml
├── database
├── docker-compose.yml
└── logs

To verify the structure, use the tree command. It is a very useful tool which we use on a daily basis.

Set permissions

Since we'll be using bind mounts in this example, it's important to ensure that specific directories (like database and logs) have the correct permissions for the ejabberd user inside the container (UID 9000, GID 9000).

Customize or skip depending on your needs:

sudo chown -R 9000:9000 database
sudo chown -R 9000:9000 logs

Based on this Issue.

2.2: The docker-compose.yml file

Now, create a docker-compose.yml file inside, containing:

services:
  ejabberd:
    image: ejabberd/ecs:latest
    container_name: ejabberd
    ports:
      - "5222:5222"  # XMPP Client
      - "5280:5280"  # Web Admin Interface, optional
    volumes:
      - ./database:/home/ejabberd/database
      - ./conf/ejabberd.yml:/home/ejabberd/conf/ejabberd.yml
      - ./logs:/home/ejabberd/logs
    restart: unless-stopped

2.3: The ejabberd.yml file

A basic configuration file for ejabberd will be required; we will name it conf/ejabberd.yml.

loglevel: 4
hosts:
- "localhost"

acl:
  admin:
    user:
      - "admin@localhost"

access_rules:
  local:
    allow: all

listen:
  -
    port: 5222
    module: ejabberd_c2s

  -
    port: 5280                       # optional
    module: ejabberd_http            # optional
    request_handlers:                # optional
      "/admin": ejabberd_web_admin   # optional

Did you know? Since 23.10, ejabberd now offers users the option to create or update the relevant MySQL, PostgreSQL or SQLite tables automatically with each update. You can read more about it here.

3: Starting ejabberd

Finally, we're set: you can run the following command to start your stack: docker-compose up -d

Your ejabberd instance should now be running in a Docker container! Good job! 🎉

From there, customize ejabberd to your liking! Naturally, in this example we're going to keep ejabberd in its barebones configuration, but we recommend configuring it at this stage to suit your needs (domains, SSL, favourite modules, chosen database, admin accounts, etc.)

Example: You could register your admin account at this stage

To use the admin interface, you need to create an admin account. You can do so by running the following command:

$ docker exec -it ejabberd bin/ejabberdctl register admin localhost very_secret_password
> User admin@localhost successfully registered

Once this step is complete, you will then be able to access the web admin interface at http://localhost:5280/admin.

4. Set up automatic updates

Finally, we come to the most interesting part: how do I keep my containers up to date?

To keep your ejabberd instance up-to-date, you can use Watchtower, a Docker container that automatically updates other containers when new versions are available.

Warning: Auto-updates are undoubtedly convenient, but they can occasionally cause issues if an update includes breaking changes. Always test updates in a staging environment and back up your data before enabling auto-updates. Further information can be found at the end of this post.

If greater control over updates is required (for example, for mission-critical production servers or clusters), we recommend using Diun, which can notify you of available updates and allow you to decide when to apply them.

4.1: Add Watchtower to your docker-compose.yml

To include Watchtower, add it as a service in docker-compose.yml:

services:
  ejabberd:
    image: ejabberd/ecs:latest
    container_name: ejabberd
    ports:
      - "5222:5222"  # XMPP Client
      - "5280:5280"  # Web Admin Interface, optional
    volumes:
      - ./database:/home/ejabberd/database
      - ./conf/ejabberd.yml:/home/ejabberd/conf/ejabberd.yml
      - ./logs:/home/ejabberd/logs
    restart: unless-stopped

  watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_POLL_INTERVAL=3600 # Sets how often Watchtower checks for updates (in seconds).
      - WATCHTOWER_CLEANUP=true # Ensures old images are cleaned up after updating.
    restart: unless-stopped

Watchtower offers a wide range of additional features, including the ability to set up notifications, exclude specific containers, and more. For further information, please refer to the Watchtower Docs.

Once the docker-compose.yml has been updated, please bring it up using the following command: docker-compose up -d

And... here you go, you're all set!

5. Best Practices & closing words

Watchtower will now perform periodic checks for updates to your ejabberd container and apply them automatically.

To be fair, by default Watchtower will also update any other containers running on the same server. This behaviour can be controlled with environment variables (see Container Selection), which let you exclude specific containers from updates.


One important thing to understand is that Watchtower will only update containers tagged with the :latest tag.

In an environment with numerous Docker containers, using the latest tag streamlines the process of automatic updates. However, it may introduce unanticipated, breaking changes with each new update. Ideally, we recommend always setting a specific version like ejabberd/ecs:24.10 and deciding how and when to update it manually (especially if you're into infra-as-code).
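Pinning a version is a one-line change in the compose file (24.10 here is just an example tag; pick the release you have actually tested):

```yaml
services:
  ejabberd:
    image: ejabberd/ecs:24.10   # pinned release instead of :latest
    # ...the rest of the service definition stays unchanged
```

With a pinned tag, Watchtower will leave the container alone until you bump the version yourself.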

However, we recognise that some users may prefer the convenience of automatic updates; personally, that's what I do on my homelab, but I'm not scared to dig in if something breaks.


tl;dr: For a small community server, homelab or personal instance, Watchtower will help keep things up to date with minimal effort. However, for bigger production environments, it is advisable to pin specific versions and update them manually, for greater control and resilience.

With this setup, you now have a fully functioning XMPP server using ejabberd, with automatic updates. You can now start building your chat applications or integrate it with your existing services! 🚀

by Adrien at November 12, 2024 14:15

November 05, 2024

ProcessOne

Thoughts on Improving Messaging Protocols — Part 2, Matrix

Thoughts on Improving Messaging Protocols — Part 2, Matrix

In the first part of this blog post, I explained how the Matrix protocol works, contrasted its design philosophy with XMPP, and discussed why these differences lead to performance costs in Matrix. Matrix processes each conversation as a graph of events, merged in real-time[1].

Merge operations can be costly in Matrix for large rooms, affecting database storage and load, as well as disk usage once memory is exhausted and the system starts swapping.

That said, there is still room for improvement in the protocol. We have designed and tested slight changes that could make Matrix much more efficient for large rooms.

A Proposal to Simplify and Speed Up Merge Operations

Here is the rationale behind a proposal we have made to simplify and speed up merge operations:

State resolution v2 uses certain graph algorithms, which can result in at least linear processing time for the number of state events in a room’s DAG, creating a significant load on servers.

The goal of this issue is to discuss and develop changes to state resolution to achieve O(n log n) total processing time when handling a room with n state events (i.e., O(log n) on average) in realistic scenarios, while maintaining a good user experience.

The approach described below is closer to state resolution v1 but seeks to address state resets in a different way.

For more detail, you can read our proposal on the Matrix spec tracker: Make state resolution faster.

In simpler terms, we propose adding a version associated with each event_id to simplify conflict management and introduce a heuristic that skips traversal of large parts of the graph.
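As a toy illustration of why a per-event version helps (a deliberate oversimplification for intuition only, not the actual proposed algorithm – see the spec tracker issue for that), a conflicted state key could then be resolved by comparing versions directly, without traversing the room's event graph:

```python
# Toy illustration: resolving one conflicted state key using per-event
# versions. This is a simplification, not the proposed Matrix algorithm.

def resolve(conflicting_events):
    """Pick the winning state event by highest version, with the
    event_id as a deterministic tie-breaker. This is O(n) over the
    conflict set, with no walk over the room's full event DAG."""
    return max(conflicting_events, key=lambda e: (e["version"], e["event_id"]))

events = [
    {"event_id": "$a", "version": 3, "content": "topic: old"},
    {"event_id": "$b", "version": 5, "content": "topic: new"},
]
winner = resolve(events)
```

The point of the sketch is the shape of the cost: comparing small per-event values instead of running graph algorithms over the whole DAG.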

Impact of the Proposal

From our initial assessment, in a very large room — such as one with 100,000 members — our approach could improve processing performance by 100x to 1000x, as the current processing cost scales with the number of users in the room. This improvement would enable smoother conversations, reduced lag, and more responsive interactions for end-users, while also reducing server infrastructure load and resource usage.

While our primary goal is to improve performance in very large rooms, these changes benefit all users by reducing overall server load and improving processing times across various room sizes.

We plan to implement this improvement in our own code to evaluate its real-world effectiveness while the Matrix team considers its potential value for the reference protocol.


  1. For those who remember, a conversation in Matrix is similar to the collaborative editing protocol built on top of XMPP for the Google Wave platform.

by Mickaël Rémond at November 05, 2024 13:53

The XMPP Standards Foundation

XMPP Summit 27

The XMPP Standards Foundation (XSF) will hold its 27th XMPP Summit in Brussels, Belgium again next year, on the two days preceding FOSDEM 2025! The XSF invites everyone interested in the development of the XMPP protocol to attend and discuss all things XMPP, in person and remotely!

The Summit

The XMPP Summit is a two-day event for the people who write and implement XMPP extensions (XEPs). The event is not a conference: besides small, short lightning talks, there are no long presentations. The participants – everyone is welcome – sit at a round table to discuss, and active participation is encouraged. Similar to an unconference, at the beginning all participants can suggest topics, and others indicate via votes whether or not they are interested in each topic. Afterwards, a rough order of topics is established, which is then followed in a moderated discussion with the participants.

If you have ever followed a thread on the standards mailing list or participated in a discussion in the public XSF channel, you should be familiar with this – only now in person. The different topics are broken up by short breaks, which are great for networking and getting to know other XMPP developers. Still, if you cannot attend in person, we will also provide a way to join the discussion online.

Agreeing on a common strategy, or even establishing a rough priority for certain features, in our decentralised and interoperable technology and protocol can be hard. In-person events do a lot to get us on the same page, so if you are an XMPP developer (of a client, server, gateway, etc.), we strongly encourage you to come to the summit. (To get the most out of the summit, you should have a background in reading, and maybe even writing, XEPs.) If you are simply an enthusiastic user or admin, we regularly have booths at various conferences (FOSDEM, CLT, FrOSCon, …) that are a great opportunity to meet us, too.

If we have gained your attention, we hope to see you at XMPP Summit 27. Read on!

Time & Address

The summit will take place at the Thon Hotel EU, with coffee breaks (from 09:00) and lunch (13:00 to 14:00) in the hotel restaurant, paid for by the XSF.

Date: Thursday 30th - Friday 31st January 2025
Time: 09:00 - 17:00 (CET) on both days

Thon Hotel EU
Room: Germany
Wetstraat / Rue de la Loi 75
1040 Brussels
Openstreetmap

Furthermore, the XSF will host its Summit Dinner on Thursday night, which is paid for by the XSF for all participating XSF members. Everyone else is of course invited to participate, at their own expense. Please reach out if you are participating as a non-member (see the communication channels listed below).

Participation

So that we can make final arrangements with the hotel, you must register before Wednesday 15th January 2025!

Please note that, although we welcome everyone to join, you must announce your attendance beforehand, as the venue is not publicly accessible. If you’re interested in attending, please make yourself known by filling out your details on the wiki page for Summit 27. To edit the page, either ask an XSF member to enter and update your details, or request a wiki account, which we’ll happily provide for you; reach out via the communication channels listed below. Once you have signed up, please book your accommodation and travel. Please also remove your name if you can no longer attend.

Please also consider signing up if you plan to:

Communication

To ensure you receive all the relevant information, updates and announcements about the event, make sure that you’re signed up to the Summit mailing list and the Summit chatroom (Webview).

Spread the word also via our communication channels such as Mastodon and Twitter.

Sponsors

We would like to thank Isode for sponsoring the XMPP Summit again.

We also would like to thank Alexander Gnauck for sponsoring the XSF Dinner again.

Also, many thanks to Daniel Gultsch for investing time and resources to help organise the event again!

We appreciate support via sponsoring or even XSF sponsorship so that we can keep the event open and accessible for everyone. If you are interested, please contact the XSF Board.

We are really excited to see many people already signing up. Looking forward to meeting all of you!

The XMPP Standards Foundation

November 05, 2024 00:00

The XMPP Newsletter October 2024

XMPP Newsletter Banner

Welcome to the XMPP Newsletter, great to have you here again! This issue covers the month of October 2024.

XSF Announcements

XSF Membership

If you are interested in joining the XMPP Standards Foundation as a member, please apply by November 24th, 2024!

XMPP Summit 27 & FOSDEM 2025

The XSF is planning the XMPP Summit 27, which is to take place on January 30th & 31st 2025 in Brussels (Belgium, Europe). Following the Summit, the XSF is also planning to be present at FOSDEM 2025, which takes place on February 1st & 2nd 2025. Find all the details in our Wiki. Please sign up now if you are planning to attend, as this helps with the organisation. The event is of course open to everyone interested in participating. Spread the word within your circles!

XSF Fiscal Hosting Projects

The XSF offers fiscal hosting for XMPP projects. Please apply via Open Collective. For more information, see the announcement blog post. Current projects you can support:

XMPP Events

  • Berlin XMPP Meetup [DE / EN]: monthly meeting of XMPP enthusiasts in Berlin, every 2nd Wednesday of the month at 6pm local time
  • XMPP Italian happy hour [IT]: monthly Italian XMPP web meeting, every third Monday of the month at 7:00 PM local time (online event, with web meeting mode and live streaming).

XMPP Articles

XMPP Software News

XMPP Clients and Applications

A basic XMPP messaging client for KaiOS

  • Mellium co-op has released Communique, version 0.0.1 of its instant messaging client with a terminal-based user interface. This initial release features 1:1 and multi-user chat support, HTTP file upload, ad-hoc commands, and chat history.
Communique: Initial release with features including 1:1 and multi-user chat support

XMPP Servers

  • ejabberd 24.10: The “Bidi” Stream Release has been released. This is a major release packed with substantial improvements and support for important extensions specified by the XMPP Standards Foundation (XSF). The improvements span enhanced security and streamlined connectivity, all designed to make ejabberd more powerful and easier to use than ever.

XMPP Libraries & Tools

  • Ignite Realtime community:
    • Smack 4.5.0-beta5 released! The Ignite Realtime developer community is happy to announce that Smack 4.5 has entered its beta phase. Smack is an XMPP client API written in Java that is able to run on Java SE and Android. The Smack 4.5 API is considered stable, however small adjustments are still possible during the beta phase.
  • go-xmpp versions 0.2.2, 0.2.3 and 0.2.4 have been released.
  • go-sendxmpp versions 0.11.3 and 0.11.4 have been released.
  • Slidge v0.2.0 has been released. Slidge is the XMPP (puppeteer) gateway library in Python that makes writing gateways to other chat networks (legacy modules) as frictionless as possible.
  • Join Jabber added two new entries to their growing list of XMPP integration tutorials: Forgejo and Sharkey!
  • QXmpp versions 1.8.2 and 1.8.3 have been released.

Extensions and specifications

The XMPP Standards Foundation develops extensions to XMPP in its XEP series in addition to XMPP RFCs.

Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).

Proposed

The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

New

  • Version 0.1.0 of XEP-0495 (Happy Eyeballs)
    • Promoted to Experimental (XEP Editor: dg)

Deferred

If an experimental XEP is not updated for more than twelve months, it will be moved off Experimental to Deferred. If there is another update, it will put the XEP back onto Experimental.

  • No XEPs deferred this month.

Updated

  • Version 1.6.2 of XEP-0198 (Stream Management)
    • Clarify server enabling stream management without requested resume functionality. (gk)
  • Version 0.3.0 of XEP-0394 (Message Markup)
    • Add support for strong emphasis, declaring language on code blocks and making lists ordered. (lmw)
  • Version 0.1.3 of XEP-0491 (WebXDC)
    • Clarifications and wording
    • Better references for WebXDC spec (spw)

Last Call

Last calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call can help improve the XEP before returning it to the Council for advancement to Stable.

  • XEP-0490: Message Displayed Synchronization

Stable

  • No XEP moved to Stable this month.

Deprecated

  • No XEP deprecated this month.

Rejected

  • No XEP rejected this month.

Spread the news

Please share the news on other networks:

Subscribe to the monthly XMPP newsletter
Subscribe

Also check out our RSS Feed!

Looking for job offers or want to hire a professional consultant for your XMPP project? Visit our XMPP job board.

Newsletter Contributors & Translations

This is a community effort, and we would like to thank translators for their contributions. Volunteers and more languages are welcome! Translations of the XMPP Newsletter will be released here (with some delay):

  • English (original): xmpp.org
    • General contributors: Adrien Bourmault (neox), Alexander “PapaTutuWawa”, Arne, cal0pteryx, emus, Federico, Gonzalo Raúl Nemmi, Jonas Stein, Kris “poVoq”, Licaon_Kter, Ludovic Bocquet, Mario Sabatino, melvo, MSavoritias (fae,ve), nicola, Schimon Zachary, Simone Canaletti, singpolyma, XSF iTeam
  • French: jabberfr.org and linuxfr.org
    • Translators: Adrien Bourmault (neox), alkino, anubis, Arkem, Benoît Sibaud, mathieui, nyco, Pierre Jarillon, Ppjet6, Ysabeau
  • Italian: notes.nicfab.eu
    • Translators: nicola
  • Spanish: xmpp.org
    • Translators: Gonzalo Raúl Nemmi
  • German: xmpp.org
    • Translators: Millesimus

Help us to build the newsletter

This XMPP Newsletter is produced collaboratively by the XMPP community. Each month’s newsletter issue is drafted in this simple pad. At the end of each month, the pad’s content is merged into the XSF Github repository. We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. You have a project and want to spread the news? Please consider sharing your news or events here, and promote it to a large audience.

Tasks we do on a regular basis:

  • gathering news in the XMPP universe
  • short summaries of news and events
  • summary of the monthly communication on extensions (XEPs)
  • review of the newsletter draft
  • preparation of media images
  • translations
  • communication via media accounts

Unsubscribe from the XMPP Newsletter

To unsubscribe from this list, please log in first. If you have not previously logged in, you may need to set up an account with the appropriate email address.

License

This newsletter is published under CC BY-SA license.

November 05, 2024 00:00

November 04, 2024

Prosodical Thoughts

New server, new sponsor

It shouldn’t surprise you, but here we have an obsession with self-hosting. We fought off many requests to migrate our hosting to Github (even before it was cool to hate Github - Prosody and Github were founded in the same year!).

As a result, we self-host our XMPP service (of course), our website, our code repos, our issue tracker, package repository and our CI and build system.

This is not always easy - our project has always been a rather informal collaboration of individuals, meaning it’s not a commercial venture and we don’t have any employees. For better or worse, we’re firmly rooted in the free and open-source software principles that focus on growing communities rather than profits.

As a result, we love working with people who have similar roots and values.

For many years we had a happy home for our servers with Bytemark, who were very supportive of open-source projects, including ours (they used Prosody themselves for communication, and some of their employees contributed to the project). We are grateful to them for sponsoring the hosting of our build server for many years. However, all good things come to an end - and when Bytemark was acquired in recent years by the much larger iomart Group PLC enterprise as part of a string of other acquisitions, we knew our good times with them were likely drawing to a close.

This was recently confirmed, as we and the other remaining Bytemark customers were notified that all services are being moved to another location and another of iomart’s brands. We also received an email to inform us that our sponsorship would no longer be in effect after this transition. The monthly price we were told we would have to pay for the server was many multiples of what an equivalent server would cost by today’s standards, even if we had income to pay for it.

So, we bid a final farewell to Bytemark! But as one chapter ends, another can begin.

At the time of the acquisition, many ex-Bytemark customers recommended various alternatives. However, among those, one independent provider, Mythic Beasts, really stood out. You may have stumbled across them already, for their innovative Raspberry Pi hosting and handling Raspberry Pi launch announcements on a stack of Raspberry Pi devices, or you may have come across them on the Fediverse via their (self-hosted, of course) @beasts Mastodon account. As well as Raspberry Pi hosting, of course they also offer conventional (dedicated and virtual) servers, DNS, traditional web space, and more.

Mythic Beasts logo

Mythic Beasts turned out to be just what we were looking for - a no-nonsense service-driven provider where you’ll find founders answering support tickets and where providing amazing service and having fun while doing so are deemed more important than maximizing growth and shareholder value.

Running services with a hosting provider is a kind of partnership that requires placing a certain amount of trust. Trust that they are competent, that it’s easy to contact someone if things go wrong, and that their values are aligned with yours for the long term. It’s hard to find providers that tick all these boxes.

Having used Mythic Beasts for a few things personally in recent years, I felt increasingly confident they would be a good home for Prosody’s infrastructure too. In fact they’ve been very supportive and understanding from the moment I reached out about Prosody’s situation, and have generously provided us with capacity to migrate all our services across and retire our old servers. You may have noticed a few blips in recent weeks as we did just that. Thanks for bearing with us!

All our services are now running smoothly on VMs provided by Mythic Beasts, and we can’t thank them enough as they enable us to continue our journey. It feels great to be with a provider that not only knows but cares about things like open-source, environmental impact, as well as IPv6, DNSSEC and all the other internet tech we care about too.

For those of you curious, here’s a list (probably not exhaustive) of things we are currently running as part of the project’s infrastructure:

If you notice any post-migration issues with our site or services, drop by the chat and let us know! Also, if you’re in need of hosting, now you know where we would suggest looking first :)

by The Prosody Team at November 04, 2024 10:00

November 01, 2024

Ignite Realtime Blog

Openfire 4.9.1 release

The Ignite Realtime community is happy to be able to announce the immediate availability of version 4.9.1 of Openfire, its cross-platform real-time collaboration server based on the XMPP protocol!

4.9.1 is a bugfix and maintenance release. Among its most important fixes is one for a memory leak that affected all recent versions of Openfire (but was likely noticeable only on those servers that see a high volume of users logging in and out). The complete list of changes that have gone into this release can be seen in the change log.

Please give this version a try! You can download installers of Openfire here. Our documentation contains an upgrade guide that helps you update from an older version.

The integrity of these artifacts can be checked with the following sha256sum values:

8c489503f24e35003e2930873037950a4a08bc276be1338b6a0928db0f0eb37d  openfire-4.9.1-1.noarch.rpm
1e80a119c4e1d0b57d79aa83cbdbccf138a1dc8a4086ac10ae851dec4f78742d  openfire_4.9.1_all.deb
69a946dacd5e4f515aa4d935c05978b5a60279119379bcfe0df477023e7a6f05  openfire_4_9_1.dmg
c4d7b15ab6814086ce5e8a1d6b243a442b8743a21282a1a4c5b7d615f9e52638  openfire_4_9_1.exe
d9f0dd50600ee726802bba8bc8415bf9f0f427be54933e6c987cef7cca012bb4  openfire_4_9_1.tar.gz
de45aaf1ad01235f2b812db5127af7d3dc4bc63984a9e4852f1f3d5332df7659  openfire_4_9_1_x64.exe
89b61cbdab265981fad4ab4562066222a2c3a9a68f83b6597ab2cb5609b2b1d7  openfire_4_9_1.zip
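For those who prefer to script the verification, the comparison against a published digest is a few lines of Python. This is an illustrative sketch (the `sha256_of` helper is ours, not part of Openfire's tooling):

```python
import hashlib

def sha256_of(path, chunk_size=8192):
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the published value for the artifact you downloaded, e.g.:
# sha256_of("openfire_4_9_1.tar.gz") should equal
# "d9f0dd50600ee726802bba8bc8415bf9f0f427be54933e6c987cef7cca012bb4"
```

This mirrors what `sha256sum` does on the command line; any mismatch means the download is corrupted or has been tampered with.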

We would love to hear from you! If you have any questions, please stop by our community forum or our live groupchat. We are always looking for volunteers interested in helping out with Openfire development!

For other release announcements and news, follow us on Mastodon or X.

6 posts - 4 participants

Read full topic

by guus at November 01, 2024 19:54

October 31, 2024

JMP

Newsletter: JMP at SeaGL, Cheogram now on Amazon

Hi everyone!

Welcome to the latest edition of your pseudo-monthly JMP update!

In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client. Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

JMP at SeaGL

The Seattle GNU/Linux Conference (SeaGL) is happening next week and JMP will be there! We’re going to have a booth with some of our employees, and will have JMP eSIM Adapters and USB card readers for purchase (if you prefer to save on shipping, or like to pay cash or otherwise), along with stickers and good conversations. :) The exhibition area is open all day on Friday and Saturday, November 8 and 9, so be sure to stop by and say hi if you happen to be in the area. We look forward to seeing you!

Cheogram Android in Amazon Appstore

We have just added Cheogram Android to the Amazon Appstore! And we also added Cheogram Android to Aptoide earlier this month. While F-Droid remains our preferred official source, we understand many people prefer to use stores that they’re used to, or that come with their device. We also realize that many people have been waiting for Cheogram Android to return to the Play Store, and we wanted to provide this other option to pay for Cheogram Android while Google works out the approval process issues on their end to get us back in there. We know a lot of you use and recommend app store purchases to support us, so let your friends know about this new Amazon Appstore option for Cheogram Android if they’re interested!

New features in Cheogram Android

As usual, we’ve added a bunch of new features to Cheogram Android over the past month or so. Be sure to update to the latest version (2.17.2-1) to check them out! (Note that Amazon doesn’t have this version quite yet, but it should be there shortly.) Here are the notable changes since our last newsletter: privacy-respecting link previews (generated by sender), more familiar reactions, filtering of conversation list by account, nicer autocomplete for mentions and emoji, and fixes for Android 15, among many others.

To learn what’s happening with JMP between newsletters, here are some ways you can find out:

by Denver Gingerich at October 31, 2024 18:01

Erlang Solutions

Why you should consider machine learning for business

Adopting machine learning for business is necessary for companies that want to sharpen their competitive edge. With the global market for machine learning projected to reach an impressive $210 billion by 2030, businesses are keen to find solutions that streamline processes and improve customer interactions.

While organisations may already employ some form of data analysis, traditional methods often lack the sophistication needed to address the complexities of today’s market. Businesses that embrace machine learning unlock valuable data insights, make accurate predictions, and deliver personalised experiences that truly resonate with customers, ultimately driving growth and efficiency.

What is Machine Learning?

Machine learning (ML) is a subset of artificial intelligence (AI). It uses algorithms designed to learn from data, identify patterns, and make predictions or decisions without explicit programming. By analysing patterns in the data, a machine learning algorithm identifies key features that define a particular data point, allowing it to apply this knowledge to new, unseen information.

Fundamentally data-driven, machine learning relies on vast information to learn, adapt, and improve over time. Its predictive capabilities allow models to forecast future outcomes based on the patterns they uncover. These models are generalisable, so they can apply insights from existing data to make decisions or predictions in unfamiliar situations.

You can read more about machine learning and AI in our previous post.

Approaches to Machine Learning

Machine learning for business typically involves two key approaches: supervised and unsupervised learning, each suited to different types of problems. Below, we explain each approach and provide examples of machine learning use cases where these techniques are applied effectively.

  • Supervised Machine Learning: This approach demands labelled data, where the input is matched with the correct output. The algorithms learn to map inputs to outputs based on this training set, honing their accuracy over time.
  • Unsupervised Machine Learning: In contrast, unsupervised learning tackles unlabelled data, compelling the algorithm to uncover patterns and structures independently. This method can involve tasks like clustering and dimensionality reduction. While unsupervised techniques are powerful, interpreting their results can be tricky, leading to challenges in assessing whether the model is truly on the right track.
Example of supervised vs unsupervised learning

Supervised learning uses historical data to make predictions, helping businesses optimise performance based on past outcomes. For example, a retailer might use supervised learning to predict customer churn. By feeding the algorithm data such as customer purchase history and engagement metrics, it learns to identify patterns that indicate a high risk of churn, allowing the business to implement proactive retention strategies.

Unsupervised learning, on the other hand, uncovers hidden patterns within data. It is particularly useful for discovering new customer segments without prior labels. For instance, an e-commerce platform might use unsupervised learning to group customers by their browsing habits, discovering niche audiences that were previously overlooked.
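To make the contrast concrete, here is a small illustrative Python sketch (the toy data and helper names are ours, not from the article): a threshold learned from labelled churn examples stands in for supervised learning, and a tiny k-means stands in for unsupervised clustering.

```python
# Supervised: labelled examples of (monthly_logins, churned) train a classifier.
labelled = [(2, True), (3, True), (4, True), (10, False), (12, False), (15, False)]

def train_threshold(data):
    """Learn a decision threshold as the midpoint between the class means."""
    churn = [x for x, y in data if y]
    stay = [x for x, y in data if not y]
    return (sum(churn) / len(churn) + sum(stay) / len(stay)) / 2

def predict_churn(logins, threshold):
    # Few logins -> likely to churn.
    return logins < threshold

threshold = train_threshold(labelled)

# Unsupervised: unlabelled browsing times are grouped without any labels at all.
def kmeans_1d(points, iters=20):
    """Tiny one-dimensional 2-means; returns the two cluster centroids."""
    centroids = [min(points), max(points)]
    for _ in range(iters):
        clusters = [[], []]
        for p in points:
            nearest = 0 if abs(p - centroids[0]) <= abs(p - centroids[1]) else 1
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

segments = kmeans_1d([1, 2, 2, 9, 10, 11])  # two natural groups of browsing times
```

The supervised half needs the churn labels to learn anything; the unsupervised half discovers the two customer segments from the raw numbers alone, which is exactly the trade-off described above.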

The Impact of Machine Learning on Business

A recent survey by McKinsey revealed that 56% of organisations surveyed are using machine learning in at least one business function to optimise their operations. This growing trend shows how machine learning for business is becoming integral to staying competitive.

The AI market as a whole is also on an impressive growth trajectory, projected to reach USD 407.0 billion by 2027.

AI Global Market Forecast to 2030

The market is expected to grow at an astounding compound annual growth rate (CAGR) of 35.7% through 2030, proving that business analytics is no longer just a trend; it’s becoming a core component of modern enterprises.

Machine Learning for Business Use Cases

Machine learning can be used in numerous ways across industries to enhance workflows. From image recognition to fraud detection, businesses are actively using AI to streamline operations.

Image Recognition

Image recognition, or image classification, is a powerful machine learning technique used to identify and classify objects or features in digital images.

Artificial intelligence (AI) and machine learning (ML) are revolutionising image recognition systems by uncovering hidden patterns in images that may not be visible to the human eye. This technology allows these systems to make independent and informed decisions, significantly reducing the reliance on human input and feedback. 

As a result, visual data streams can be processed automatically at an ever-increasing scale, streamlining operations and enhancing efficiency. By harnessing the power of AI, businesses can leverage these insights to improve their decision-making processes and gain a competitive edge in their respective markets.

It plays a crucial role in tasks like pattern recognition, face detection, and facial recognition, making it indispensable in the security and social media sectors.

Fraud Detection

With financial institutions handling millions of transactions daily, distinguishing between legitimate and fraudulent activity can be a challenge. As online banking and cashless payments grow, so too has the volume of fraud. A 2023 report from TransUnion revealed a 122% increase in digital fraud attempts in the US between 2019 and 2022. 

Machine learning helps businesses by flagging suspicious transactions in real time; companies like Mastercard use AI to predict and prevent fraud before it occurs, protecting consumers from potential theft.
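As a rough illustration of the idea (a simple baseline, not Mastercard's actual system), a fraud flag can score how far a transaction amount deviates from a user's history. All names and data below are invented for the sketch:

```python
import statistics

def flag_suspicious(history, amount, z_threshold=3.0):
    """Flag a transaction whose amount is an outlier versus the user's history.

    A z-score baseline: real fraud systems combine many more signals
    (location, device, merchant, timing) in learned models.
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return amount != mean
    z = abs(amount - mean) / stdev
    return z > z_threshold

past = [12.50, 9.99, 15.00, 11.25, 14.10, 10.80]
print(flag_suspicious(past, 13.00))   # in line with history -> False
print(flag_suspicious(past, 950.00))  # extreme outlier -> True
```

A learned model replaces the fixed threshold with patterns mined from millions of labelled transactions, but the shape of the decision (score, then flag) is the same.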

Speech Recognition

Voice commands have become a common feature in smart devices, from setting timers to searching for shows. 

Thanks to machine learning, devices like Google Nest speakers and Amazon Blink security systems can recognise and act on voice inputs, making hands-free operation more convenient for users in everyday situations.

Improved Healthcare

Machine learning in healthcare has led to major improvements in patient care and medical discoveries. By analysing vast amounts of healthcare data, machine learning enhances the accuracy of diagnoses, optimises treatments, and accelerates research outcomes.

For instance, AI systems are already employed in radiology to detect diseases in medical images, such as identifying cancerous growths. Additionally, machine learning is playing a crucial role in genomic research by uncovering patterns linked to genetic disorders and potential therapies. These advancements are paving the way for improved diagnostics and faster medical research, offering tremendous potential for the future of healthcare.

Key applications of machine learning in healthcare include:

  • Developing predictive modelling
  • Improving diagnostic accuracy
  • Personalising patient care
  • Automating clinical workflows
  • Enhancing patient interaction

Machine learning in healthcare utilises algorithms and statistical models to analyse large medical datasets, facilitating better decision-making and personalised care. As a subset of AI, machine learning identifies patterns, makes predictions, and continuously improves by learning from data. Different types of learning, including supervised and unsupervised learning, find applications in disease classification and personalised treatment recommendations.

Chatbots

Many businesses rely on customer support to maintain satisfaction. However, staffing trained specialists can be expensive and inefficient. AI-powered chatbots, equipped with natural language processing (NLP), assist by handling basic customer queries. This frees up human agents to focus on more complicated issues. Companies can provide more efficient and effective support without overburdening their teams.

Each of these applications offers businesses the chance to streamline operations and improve customer experiences. 

Machine Learning Case Studies

Machine learning for business is transforming industries by enabling companies to enhance their operations, improve customer experiences, and drive innovation. 

Here are a few machine learning case studies showing how leading organisations have integrated machine learning into their business strategies.

PayPal

PayPal, a worldwide payment platform, faced huge challenges in identifying and preventing fraudulent transactions. 

PayPal case study


To tackle this issue, the company implemented machine learning algorithms designed for fraud detection. These algorithms analyse various aspects of each transaction, including the transaction location, the device used, and the user’s historical behaviour. This approach has significantly enhanced PayPal’s ability to protect users and maintain the integrity of its payment platform.

YouTube

YouTube has long employed machine learning to optimise its operations, particularly through its recommendation algorithms. By analysing vast amounts of historical data, YouTube suggests videos to its viewers based on their preferences. Currently, the platform processes over 80 billion data points for each user, requiring large-scale neural networks that have been in use since 2008 to effectively manage this immense dataset.

YouTube case study

Dell

Recognising the importance of data in marketing, Dell’s marketing team sought a data-driven solution to enhance response rates and understand the effectiveness of various words and phrases. Dell partnered with Persado, a firm that leverages AI to create compelling marketing content. This collaboration led to an overhaul of Dell’s email marketing strategy, resulting in a 22% average increase in page visits and a 50% boost in click-through rates (CTR). Dell now utilises machine learning methods to refine its marketing strategies across emails, banners, direct mail, Facebook ads, and radio content.

Dell case study

Tesla

Tesla employs machine learning to enhance the performance and features of its electric vehicles. A key application is its Autopilot system, which combines cameras, sensors, and machine learning algorithms to provide advanced driver assistance features such as lane centring, adaptive cruise control, and automatic emergency braking.

Tesla case study

The Autopilot system uses deep neural networks to process vast amounts of real-world driving data, enabling it to predict driving behaviour and identify potential hazards. Additionally, Tesla leverages machine learning in its battery management systems to optimise battery performance and longevity by predicting behaviour under various conditions.

Netflix

Netflix is a leader in personalised content recommendations. It uses machine learning to analyse user viewing habits and suggest shows and movies tailored to individual preferences. This feature has proven essential for improving customer satisfaction and increasing subscription renewals. To develop this system, Netflix utilises viewing data—including viewing durations, metadata, release dates, timestamps etc. Netflix then employs collaborative filtering, matrix factorisation, and deep learning techniques to accurately predict user preferences.

Netflix case study
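Matrix factorisation, one of the techniques mentioned above, can be sketched in a few lines of Python. This is a toy rank-1 SGD version with invented data, not Netflix's system: it learns user and item vectors whose dot product approximates the observed ratings, then predicts an unrated pair.

```python
import random

def factorise(ratings, n_users, n_items, k=1, lr=0.02, reg=0.01, epochs=1000):
    """Toy matrix factorisation via SGD.

    ratings: list of (user, item, value) triples; learns user vectors P and
    item vectors Q so that dot(P[u], Q[i]) approximates each observed value.
    """
    rng = random.Random(0)  # fixed seed for reproducibility
    P = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(P[u][f] * Q[i][f] for f in range(k))
            err = r - pred
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)  # regularised SGD step
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

# User 1 mirrors user 0 on the items both have rated; predict the missing one.
ratings = [(0, 0, 5), (0, 1, 1), (0, 2, 5), (1, 0, 5), (1, 1, 1)]
P, Q = factorise(ratings, n_users=2, n_items=3)
predicted = sum(P[1][f] * Q[2][f] for f in range(len(P[1])))
```

With this data the prediction for the unrated (user 1, item 2) pair should land near user 0's rating of 5, which is the intuition behind collaborative filtering: users with similar histories get similar recommendations.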

Benefits of Machine Learning in Business

If you’re still contemplating the value of machine learning for your business, consider the following key benefits:

  • Automation across business processes: Machine learning automates key business functions, from marketing to manufacturing, boosting yield by up to 30%, reducing scrap, and cutting testing costs. This frees employees for more creative, strategic tasks.
  • Efficient predictive maintenance: ML helps manufacturers predict equipment failures, reducing downtime and extending machinery lifespan, ensuring operational continuity.
  • Enhanced customer experience and accurate sales forecasts: Retailers use machine learning to analyse consumer behaviour, accurately forecast demand, and personalise offers, greatly improving customer experience.
  • Data-driven decision-making: ML algorithms quickly extract insights from data, enabling faster, more informed decision-making and helping businesses develop effective strategies.
  • Error reduction: By automating tasks, machine learning reduces human error, allowing employees to focus on complex tasks and significantly minimising mistakes.
  • Increased operational efficiency: Automation and error reduction from ML lead to efficiency gains. AI systems like chatbots boost productivity by up to 54%, operating 24/7 without fatigue.
  • Enhanced decision-making: ML processes large data sets swiftly, turning information into objective, data-driven decisions, removing human bias and improving trend analysis.
  • Addressing complex business issues: Machine learning tackles complex challenges by streamlining operations and boosting performance, enhancing productivity and scalability.


As organisations increasingly adopt machine learning, they position themselves not only to meet current demands but also to drive future innovation.

Elixir and Erlang in Machine Learning

As organisations explore machine learning tools, many are turning to Erlang and Elixir programming languages to develop customised solutions that cater to their needs. Erlang’s fault tolerance and scalability make it ideal for AI applications, as described in our blog on adopting AI and machine learning for business. Additionally, Elixir’s concurrency features and simplicity enable businesses to build high-performance AI applications. 

Learn more about how to build a machine-learning project in Elixir here.

Elixir, built on the Erlang virtual machine (BEAM), delivers top concurrency and low latency. Designed for real-time, distributed systems, Erlang prioritises fault tolerance and scalability, and Elixir builds on this foundation with a high-level, functional programming approach. By using pure functions and immutable data, Elixir reduces complexity and minimises unexpected behaviours in code. It excels at handling multiple tasks simultaneously, making it ideal for AI applications that need to process large amounts of data without compromising performance. 
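
The immutability mentioned above is easy to see in a tiny example: "updating" a data structure in Elixir returns a new value and leaves the original untouched, which is why concurrent tasks can share data without surprises. (Variable names here are purely illustrative.)

```elixir
# Immutable data in practice: Map.put/3 returns a new map
# instead of mutating the existing one.
scores = %{model_a: 0.91}
updated = Map.put(scores, :model_b, 0.88)

IO.inspect(scores)   # %{model_a: 0.91} (the original is unchanged)
IO.inspect(updated)  # %{model_a: 0.91, model_b: 0.88}
```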

Elixir’s simplicity in problem-solving also aligns perfectly with AI development, where reliable and straightforward algorithms are essential for machine learning. Furthermore, its distribution features make deploying AI applications across multiple machines easier, meeting the high computational demands of AI systems.

With a rich ecosystem of libraries and tools, Elixir streamlines development, so AI applications are scalable, efficient, and reliable. As AI and machine learning become increasingly vital to business success, creating high-performing solutions will become a key competitive advantage.

Final Thoughts

Embracing machine learning for business is no longer optional for companies that want to remain competitive. Machine learning tools empower businesses to make faster, data-driven decisions, streamline operations, and offer personalised customer experiences. If you’d like to discuss building AI systems using Elixir and Erlang, or want more insights into implementing machine learning solutions, contact the Erlang Solutions team today.

The post Why you should consider machine learning for business appeared first on Erlang Solutions.

by Erlang Solutions Team at October 31, 2024 10:30

October 29, 2024

ProcessOne

ejabberd 24.10

We’re excited to announce ejabberd 24.10, a major release packed with substantial improvements and support for important extensions specified by the XMPP Standards Foundation (XSF). This release represents three months of focused development, bringing around 100 commits to the core repository alongside key updates in dependencies. The improvements span enhanced security and streamlined connectivity—all designed to make ejabberd more powerful and easier to use than ever.

Release Highlights:

If you are upgrading from a previous version, please note minor changes in commands and two changes in hooks. There are no configuration or SQL schema changes in this release.

Below is a detailed breakdown of the new features, fixes, and enhancements:

Support for XEP-0288: Bidirectional Server-to-Server Connections

The new mod_s2s_bidi module introduces support for XEP-0288: Bidirectional Server-to-Server Connections. This update removes the requirement for two connections per server pair in XMPP federations, allowing for more streamlined inter-server communications. However, for full compatibility, ejabberd can still connect to servers that do not support bidirectional connections, using two connections when necessary. The module is enabled by default in the sample configuration.
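
For administrators maintaining their own configuration rather than the sample one, enabling the module follows the usual ejabberd.yml pattern; a minimal sketch (the module needs no mandatory options):

```yaml
modules:
  mod_s2s_bidi: {}
```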

Support for XEP-0480: SASL Upgrade Tasks

The new mod_scram_upgrade module implements XEP-0480: SASL Upgrade Tasks. Compatible clients can now automatically upgrade encrypted passwords to more secure formats, enhancing security with minimal user intervention.

PubSub Service Improvements

We’ve implemented six noteworthy fixes to improve PubSub functionality:

  • PEP notifications are sent only to owners when +notify (3469a51)
  • Non-delivery errors for locally generated notifications are now skipped (d4b3095)
  • Fix default node config parsing (b439929)
  • Fix merging of default node options (ca54f81)
  • Fix choice of node config defaults (a9583b4)
  • Fall back to default plugin options (36187e0)

IQ permission for privileged entities

The mod_privilege module now supports IQ permission based on version 0.4 of XEP-0356: Privileged Entity. See #3889 for details. This feature is especially useful for XMPP gateways using the Slidge library.

WebAdmin improvements

The ejabberd 24.06 release laid the foundation for a more streamlined WebAdmin interface, reusing existing API commands instead of page-specific code with potentially different logic. This major change allows developers to add new pages very quickly, simply by calling existing commands. It also allows administrators to use the same commands as in ejabberdctl or any other command frontend.

As a result, many new pages and content were added. Building on that, the 24.10 update introduces MAM (Message Archive Management) support, allowing administrators to view message counts, remove all of a user’s archived messages or only those exchanged with a specific contact, and browse the MAM archive directly from WebAdmin.

Additionally, WebAdmin now hides pages related to modules that are disabled, preventing unnecessary options from displaying. This affects mod_last, mod_mam, mod_offline, mod_privacy, mod_private, mod_roster, mod_vcard.

Fixes in commands

  • set_presence: Now returns an error when the session is not found.

  • send_direct_invitation: Improved handling of malformed JIDs.

  • update: Fix command output. Previously, ejabberd_update:update/0 returned the value of release_handler_1:eval_script/1, which is the list of updated but unpurged modules, i.e., modules where one or more processes are still running an old version of the code. Since commit 5a34020d23f455f80a144bcb0d8ee94770c0dbb1, the ejabberd update command assumed that value to be the list of updated modules instead. As that seems more useful, ejabberd_update:update/0 was modified accordingly, fixing the command output.

  • get_mam_count: New command to retrieve the number of archived messages for a specific account.

Changes in hooks

Two key changes in hooks:

  • New check_register_user hook in ejabberd_auth.erl to allow blocking account registration when a tombstone exists.

  • Modified room_destroyed hook in mod_muc_room.erl. Until now, the hook passed the arguments LServer, Room, and Host. It now passes LServer, Room, Host, and Persistent. The new Persistent argument carries the room’s persistent option, which mod_tombstones requires because only persistent rooms should generate a tombstone, temporary ones should not. The option cannot simply be overwritten, as its real value must still be known even while the room is being destroyed.

Log Erlang/OTP and Elixir versions

During server start, ejabberd now logs not only its own version number, but also the Erlang/OTP and Elixir versions being used. This helps administrators determine which software versions are in use, which is especially useful when investigating a problem or describing it to others when asking for help.

The ejabberd.log file now looks like this:

...
2024-10-22 13:47:05.424 [info] Creating Mnesia disc_only table 'oauth_token'
2024-10-22 13:47:05.427 [info] Creating Mnesia disc table 'oauth_client'
2024-10-22 13:47:05.455 [info] Waiting for Mnesia synchronization to complete
2024-10-22 13:47:05.591 [info] ejabberd 24.10 is started in the node :ejabberd@localhost in 1.93s
2024-10-22 13:47:05.606 [info] Elixir 1.16.3 (compiled with Erlang/OTP 26)
2024-10-22 13:47:05.606 [info] Erlang/OTP 26 [erts-14.2.5.4] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:1] [jit:ns]

2024-10-22 13:47:05.608 [info] Start accepting TCP connections at 127.0.0.1:7777 for :mod_proxy65_stream
2024-10-22 13:47:05.608 [info] Start accepting UDP connections at [::]:3478 for :ejabberd_stun
2024-10-22 13:47:05.608 [info] Start accepting TCP connections at [::]:1883 for :mod_mqtt
2024-10-22 13:47:05.608 [info] Start accepting TCP connections at [::]:5280 for :ejabberd_http
...

Brand new ProcessOne and ejabberd web sites

We’re excited to unveil the redesigned ProcessOne website, crafted to better showcase our expertise in large-scale messaging across XMPP, MQTT, Matrix, and more. This update highlights our core mission of delivering scalable, reliable messaging solutions, with a fresh layout and streamlined structure that reflect our cutting-edge work in the field.

You now get a cleaner ejabberd page, offering quick access to important URLs for downloads, blog posts, and documentation.

Behind the scenes, we’ve transitioned from WordPress to Ghost, a move inspired by its efficient, user-friendly authoring tools and long-term maintainability. All previous blog content has been preserved, and with this new setup, we’re poised to deliver more frequent updates on messaging, XMPP, ejabberd, and related topics.

We welcome your feedback—join us on our new site to share your thoughts, or let us know about any issue or broken link!

Acknowledgments

We would like to thank all those who contributed source code, documentation, and translations for this release:

And also to all the people contributing in the ejabberd chatroom, issue tracker...

Improvements in ejabberd Business Edition

Customers of the ejabberd Business Edition, in addition to all those improvements and bugfixes, also get MUC support in mod_unread.

ejabberd keeps a counter of unread messages per conversation using the mod_unread module. This now also works in MUC rooms: each user can retrieve the number of unread messages in each of their rooms.

ChangeLog

This is a more detailed list of changes in this ejabberd release:

Miscellanea

  • ejabberd_c2s: Optionally allow unencrypted SASL2
  • ejabberd_system_monitor: Handle call by gen_event:swap_handler (#4233)
  • ejabberd_http_ws: Remove support for old websocket connection protocol
  • ejabberd_stun: Omit auth_realm log message
  • ext_mod: Handle info message when contrib module transfers table ownership
  • mod_block_strangers: Add feature announcement to disco-info (#4039)
  • mod_mam: Advertise XEP-0424 feature in server disco-info (#3340)
  • mod_muc_admin: Better handling of malformed jids in send_direct_invitation command
  • mod_muc_rtbl: Fix call to gen_server:stop (#4260)
  • mod_privilege: Support "IQ permission" from XEP-0356 0.4.1 (#3889)
  • mod_pubsub: Don't blindly echo PEP notification
  • mod_pubsub: Skip non-delivery errors for local pubsub generated notifications
  • mod_pubsub: Fall back to default plugin options
  • mod_pubsub: Fix choice of node config defaults
  • mod_pubsub: Fix merging of default node options
  • mod_pubsub: Fix default node config parsing
  • mod_register: Support to block IPs in a vhost using append_host_config (#4038)
  • mod_s2s_bidi: Add support for S2S Bidirectional
  • mod_scram_upgrade: Add support for SCRAM upgrade tasks
  • mod_vcard: Return error stanza when storage doesn't support vcard update (#4266)
  • mod_vcard: Return explicit error stanza when user attempts to modify other's vcard
  • Minor improvements to support mod_tombstones (#2456)
  • Update fast_xml to use use_maps and remove obsolete elixir files
  • Update fast_tls and xmpp to improve s2s fallback for invalid direct tls connections
  • make-binaries: Bump dependency versions: Elixir 1.17.2, OpenSSL 3.3.2, ...

Administration

  • ejabberdctl: If ERLANG_NODE lacks host, add hostname (#4288)
  • ejabberd_app: At server start, log Erlang and Elixir versions
  • MySQL: Fix column type in the schema update of the archive table

Commands API

  • get_mam_count: New command to get number of archived messages for an account
  • set_presence: Return error when session not found
  • update: Fix command output
  • Add mam and offline tags to the related purge commands

Code Quality

  • Fix warnings about unused macro definitions reported by Erlang LS
  • Fix Elvis report: Fix dollar space syntax
  • Fix Elvis report: Remove spaces in weird places
  • Fix Elvis report: Don't use ignored variables
  • Fix Elvis report: Remove trailing whitespace characters
  • Define the types of options that opt_type.sh cannot derive automatically
  • ejabberd_http_ws: Fix dialyzer warnings
  • mod_matrix_gw: Remove useless option persist
  • mod_privilege: Replace try...catch with a clean alternative

Development Help

  • elvis.config: Fix file syntax, set vim mode, disable many tests
  • erlang_ls.config: Let it find paths, update to Erlang 26, enable crossref
  • hooks_deps: Hide false-positive warnings about gen_mod
  • Makefile: Add support for make elvis when using rebar3
  • .vscode/launch.json: Experimental support for debugging with Neovim
  • CI: Add Elvis tests
  • CI: Add XMPP Interop tests
  • Runtime: Cache hex.pm archive from rebar3 and mix

Documentation

  • Add links in top-level options documentation to their Docs website sections
  • Document which SQL servers can really use update_sql_schema
  • Improve documentation of ldap_servers and ldap_backups options (#3977)
  • mod_register: Document behavior when access is set to none (#4078)

Elixir

  • Handle case when elixir support is enabled but not available
  • Start ExSync manually to ensure it's started if (and only if) Relive
  • mix.exs: Fix mix release error: logger being regular and included application (#4265)
  • mix.exs: Remove from extra_applications the apps already defined in deps (#4265)

WebAdmin

  • Add links in user page to offline and roster pages
  • Add new "MAM Archive" page to webadmin
  • Improve many pages to handle when modules are disabled
  • mod_admin_extra: Move some webadmin pages to their modules

Full Changelog

https://github.com/processone/ejabberd/compare/24.07...24.10

ejabberd 24.10 download & feedback

As usual, the release is tagged in the Git source code repository on GitHub.

The source package and installers are available in ejabberd Downloads page. To check the *.asc signature files, see How to verify ProcessOne downloads integrity.

For convenience, there are alternative download locations like the ejabberd DEB/RPM Packages Repository and the GitHub Release / Tags.

The ecs container image is available in docker.io/ejabberd/ecs and ghcr.io/processone/ecs. The alternative ejabberd container image is available in ghcr.io/processone/ejabberd.

If you believe you've found a bug, please search or file a bug report on GitHub Issues.

by Jérôme Sautret at October 29, 2024 14:26

October 24, 2024

Erlang Solutions

Implementing Phoenix LiveView: From Concept to Production

When I began working with Phoenix LiveView, the project evolved from a simple backend service into a powerful, UI-driven customer service tool. A basic Phoenix app for storing user data quickly became a core part of our client’s workflow.

In this post, I’ll take you through a project that grew beyond its original purpose: from a service for storing and serving user data to a LiveView-powered application that is now a key customer service tool in the client’s organisation.

Why We Chose Phoenix LiveView

Our initial goal was to migrate user data from an external, paid service to a new in-house solution, developed collaboratively by Erlang Solutions (ESL) and the client’s teams.

With millions of users, we needed a simple way to verify migrated data without manually connecting to the container and querying the database every time.

Since the in-house service was a Phoenix application that uses Ecto and Postgres, adding LiveView was the most natural fit.

Implementing Phoenix LiveView: Data Migration and UI Development

After we had established the goal, the next step was to create a database service to store and serve user information to other services, as well as to migrate all existing user data from an external service to the new one.

We chose Phoenix with Ecto and Postgres, as the old database was already connected to a Phoenix application, and the client’s team was well-versed in Elixir and BEAM.

Data Migration Strategy

The ESL and client teams’ strategy began by slowly copying user data from the old service to the new database whenever users logged in. For certain users (e.g., developers), we logged them in and pulled user information only from the new system. We defined a new login session struct (Elixir struct), which we used for pattern matching to determine whether to use the old or new system. The old system was treated as a fallback and the source of truth for user data.
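
A hypothetical sketch of that pattern matching (the module and struct names are ours, not the client’s actual code): a new login session struct routes reads to the new system, while legacy sessions fall back to the old service, the source of truth.

```elixir
defmodule Accounts.NewSession do
  # Session struct issued after a user's data has been migrated.
  defstruct [:user_id, :email]
end

defmodule Accounts.LegacySession do
  # Session struct for users still served by the old external service.
  defstruct [:user_id]
end

defmodule Accounts.UserLookup do
  alias Accounts.{NewSession, LegacySession}

  # Users logged in with the new session struct read only from the new DB.
  def fetch_user(%NewSession{user_id: id}), do: {:new_db, id}

  # Everyone else falls back to the old service, the source of truth.
  def fetch_user(%LegacySession{user_id: id}), do: {:old_service, id}
end
```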

Phoenix LiveView Migration to in-house database

With this strategy, we could develop and test the new database system in parallel with the old one in production, without affecting regular users, and ensured that everything worked as expected.

In the end, we performed a data dump for all users and configured the service to use the new system as the main source of truth. Since we had tested with a small number of users beforehand, the transition was smooth, and users had no idea anything had changed on their end. Response times were cut in half compared to the previous solution!

The Evolution of LiveView Application

The addition of LiveView to the application was first considered when the ESL and client teams wanted to inspect the test migration data. The team wanted to cross-reference immediately whether user data had been inserted or updated as intended in the new service. At first this was complicated and cumbersome, as we had to connect to the application remotely and run a manual query or call an internal function from a remote Elixir shell.

Phoenix LiveView: Evolution of LiveView Application

Initially, LiveView was developed solely for the team. We started with a simple table listing users, then added search functionality for IDs or emails, followed by pagination as the test data grew. With this simple LiveView UI in place, we started the data migration process, and the UI helped tremendously when verifying that the data had been migrated correctly and how many users we had successfully migrated.
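
As a rough illustration of the logic behind that UI (module name and page size are ours, not the production code), the search and pagination steps reduce to two pure functions that a LiveView event handler could call:

```elixir
defmodule Admin.UserSearch do
  @moduledoc "Illustrative search/pagination helpers, not the client's code."
  @page_size 25

  # Match users by exact ID or a case-insensitive email fragment.
  def filter(users, query) when is_binary(query) do
    q = String.downcase(query)

    Enum.filter(users, fn u ->
      to_string(u.id) == q or String.contains?(String.downcase(u.email), q)
    end)
  end

  # Return the requested page of results (pages are 1-based).
  def paginate(users, page) when page >= 1 do
    Enum.slice(users, (page - 1) * @page_size, @page_size)
  end
end
```

In the real application the lookup went through Ecto queries against Postgres, but the shape of the logic is the same.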

Adoption and Expansion of the LiveView Tool

As we demonstrated the UI to stakeholders, it quickly became the go-to tool for customer service, with new features continuously added based on feedback. The development team received many requests from customer service and other managers in the client’s organisation. We fulfilled these requests with features such as searching users by a combination of fields, helping change users’ email addresses, and checking user activity (e.g., when a user’s email was changed or if users suspected they had been hacked).

Later, we connected the LiveView application to sync and display data from another internal service, which contained information about users’ access to the client’s product. The customer service team was able to get a more complete view of the user and could use the same tool to grant or sync user access without switching to other systems.

The best aspect of using Phoenix LiveView is that the development team also owned the UI. We determined the data structure, knew what needed to be there, and designed the LiveView page ourselves. This removed the need to rely on another team, and we could reflect changes swiftly in the web views without having to coordinate with external teams.

Challenges and Feedback from Implementing Phoenix LiveView

There were some glitches along the way, and when we asked for feedback from the customer service team, we found several UX aspects that could be improved. For example, data didn’t always update immediately, or buttons occasionally failed to work properly. However, these issues also indicated that the Phoenix LiveView application was used heavily by the team, emphasising the need for improvements to support better workflows.

While our LiveView implementation worked well, it wasn’t without imperfections. Most of our development team lacked extensive web development experience, so there were several aspects we either overlooked or didn’t fully consider. A few team members had some knowledge of web technologies like Tailwind and CSS/HTML, which helped guide us, but we realised that basic HTML/CSS skills alone wouldn’t be sufficient to create a polished user experience (UX) and a smooth interface in a LiveView application.

Another challenge was infrastructure. Since our service was read-heavy, we used AWS RDS reader instances to maximise performance, but this led to occasional replication delays. These delays could cause mismatches when customer service updated data and LiveView reloaded the page before the updates had replicated to the reader instances. We had to carefully consider when it was appropriate to use the reader instances and adjust our approach accordingly.

Team Dynamics and Collaboration

Mob programming was also one of the factors that led to the success of this project. Our team consists of members with different areas of expertise. By working together, we could discuss and share our experience while programming, instead of having to explain later, in code review or knowledge-sharing sessions, what each of us had implemented and why. For example, we guided a member with more experience in Erlang/OTP through creating a form with LiveView, which required deeper experience with Ecto and Phoenix. That member could then explain and guide others through OTP-related implementation in our services.

Mob programming helped our team focus on one large task at a time. This collaborative approach ensured a consistent codebase with unified conventions, leading to efficient feature implementation.

Conclusion

What began as a simple backend project with Phoenix and Ecto evolved into a key tool for customer service, driven by the power of Phoenix LiveView. The Admin page, initially unplanned, became an integral part of the client’s workflow, proving the vast potential of LiveView and Elixir.

Though we encountered challenges, LiveView’s real-time interactivity, seamless integration, and developer control over both the backend and UI were invaluable. We believe we’ve only scratched the surface of what developers can achieve with LiveView.

Want to learn more about LiveView? Check out this article. If you’re exploring Phoenix LiveView for your project, feel free to reach out—we’d love to share our experience and help you unlock its full potential.

The post Implementing Phoenix LiveView: From Concept to Production appeared first on Erlang Solutions.

by Phuong Van at October 24, 2024 09:22

October 22, 2024

ProcessOne

ProcessOne Unveils New Website

We’re excited to announce the relaunch of our website, designed to better showcase our expertise in large-scale messaging solutions, highlighting our full spectrum of supported protocols—from XMPP to MQTT and Matrix. This reflects our core strength: delivering reliable messaging at scale.

The last major redesign was back in October 2017, so this update was long overdue. As we say farewell to the old design, here’s a screenshot of the previous version to commemorate the journey so far.

In addition to refreshing the layout and structure, we’ve made a significant change under the hood by migrating from WordPress to Ghost. After using Ghost for my personal blog and being thoroughly impressed, we knew it was the right choice for ProcessOne. The new platform offers not only long-term maintainability but also a much more streamlined, enjoyable day-to-day experience, thanks to its faster and more efficient authoring tools.

All of our previous blog content has been successfully migrated, and we’re now in a great position to deliver more frequent updates on topics such as messaging, XMPP, ejabberd, MQTT, and Matrix. Stay tuned for exciting new posts!

We’d love to hear your feedback and suggestions on what topics you’d like us to cover next. To join the conversation, simply create an account on our site and share your thoughts.

by Mickaël Rémond at October 22, 2024 14:05

October 21, 2024

Jabber.org Notices

Server Migration

On October 19th, we completed a migration of the service to a brand new server. Everything looks solid, but if you experience problems feel free to contact the administrator.

October 21, 2024 00:00

October 17, 2024

Erlang Solutions

Client Case Studies with Erlang Solutions

At Erlang Solutions, we’ve worked with diverse clients, solving business challenges and delivering impactful results. We would like to share just some of our top client case studies in this latest post with you.

Get a glimpse into how our leading technologies—Erlang, Elixir, MongooseIM, and more—combined with our expert team, have transformed the outcomes for major industry players.

Transforming streaming with zero downtime for TV4

Our first client case study is our partnership with TV4. The leading Nordic broadcaster needed to address major challenges in the competitive streaming industry. With global giants like Netflix and Disney Plus on the rise, TV4 needed to unify user data from multiple platforms into a seamless streaming experience for millions of subscribers.

Using Elixir, we ensured a smooth migration and helped TV4 reduce infrastructure costs and improve efficiency. 

Check out the full TV4 case study

Secure messaging solutions for financial services with TeleWare

Erlang Solutions partnered with TeleWare to enhance their Reapp with secure instant messaging (IM) capabilities for a major UK financial services group. As TeleWare aimed to meet strict regulatory requirements while improving user experience, they needed a robust, scalable solution that could integrate seamlessly into their existing infrastructure.

We utilised MongooseIM’s out-of-the-box functionality, and TeleWare quickly integrated group chat features that allowed secure collaboration while meeting Financial Conduct Authority (FCA) compliance standards.

Take a look at the full TeleWare case study.

Gaming experiences with enhanced scalability and performance for FACEIT

FACEIT, the leading independent competitive gaming platform with over 25 million users, faced scalability and performance challenges. As its user base grew, FACEIT needed to upgrade its systems to handle hundreds of thousands of players seamlessly.

By upgrading to the latest version of MongooseIM and Erlang, we delivered a solution that managed large user lists and improved overall system efficiency.

Explore the full FACEIT case study.

Rapid growth with scalable systems for BET Software

In another one of our client case studies, we worked with BET Software, a leading betting software provider in South Africa, to address the challenges posed by rapid growth and increasing user demand. As the main technology provider for Hollywoodbets, BET Software needed a more resilient and scalable system to support peak betting periods. 

By utilising Elixir to support the transition to a distributed data architecture, we helped BET Software eliminate bottlenecks and ensure seamless service, even during the busiest betting events.

Read the BET Software case study in full.

Innovation and competitive edge with International Registries Inc.

The final client case study in this series is with International Registries Inc. (IRI), a global leader in maritime and corporate registry services, which was looking to enhance its technological infrastructure and strengthen its competitive advantage.

Erlang Solutions helped IRI by using Elixir to reduce costs, improve system maintainability, and decommission servers. 

Discover the complete IRI case study. 

Real results from client case studies 

Our client case study examples show how we help companies like TV4, FACEIT, TeleWare, BET Software, and International Registries Inc. solve tough tech challenges and excel in competitive markets. Whether it’s boosting performance, securing communications, or scaling for growth, our solutions unlock new possibilities.

You can explore more Erlang Solutions case studies here.

If you’d like to chat with the Erlang Solutions team about what we can do for you, feel free to drop us a message.

The post Client Case Studies with Erlang Solutions appeared first on Erlang Solutions.

by Erlang Solutions Team at October 17, 2024 09:34

Ignite Realtime Blog

Smack 4.5.0-beta5 released

The Ignite Realtime developer community is happy to announce that Smack 4.5 has entered its beta phase. Smack is an XMPP client API written in Java that runs on Java SE and Android. Smack’s beta phase started a few weeks ago, but 4.5.0-beta5 is considered a good candidate to announce, as many smaller issues have been ironed out.

With Smack 4.5, we bumped the minimum Java version to 11. Furthermore, Smack now requires a minimum Android API level of 26 to run.

If you are using Smack 4.4 (or maybe an even older version), then right now is the perfect time to create an experimental branch with Smack 4.5 to ease the transition.

The Smack 4.5 API is considered stable; however, small adjustments are still possible during the beta phase.

8 posts - 3 participants

Read full topic

by Flow at October 17, 2024 07:27

October 14, 2024

Sam Whited

Coffeeneuring 2024

Every year in October I participate in Love To Ride’s (psst, sign up using that link so I can get some extra points when you do your first bike ride!) Biketober competition. The TL;DR is that for every day that you go for a ride, for every mile that you ride, and for every new rider that you invite you get points for your team. At the end of the month winners are announced and there are some fun prizes. It’s not about big miles, it’s about getting more people outside riding bikes, and I love it for that.

This year in addition to Biketober I’ve decided to try the, unrelated, annual Coffeeneuring Challenge. If you’re not familiar with the challenge, the idea is pretty simple: between October 6th and November 18th you ride your bike to 7 different coffee (or other approved beverage) locations, with some minor limits on distance and how often a coffee place qualifies. For a full list of “rules” you can see this year’s post: “Coffeeneuring Challenge 2024: The Year of Small Wins”.

We’re a little ways into the challenge already, so this post will be how I document my rides for this year going forward! Let’s start with a few I’ve already done. For the purpose of the rules I’ll be starting my weeks on Monday, because that’s just how weeks work and this ridiculous notion that Sunday is both the “weekend” but also somehow the start of the week needs to be abolished.

Week One (Oct 05–06)

Oct 5th

Per rule 2: “Soft Launch Saturday” I completed my first ride of the joint Biketober/Coffeeneuring season! I was on my way back from an IWW meeting at the Little Five Points Community Center and decided that I’d make a detour to stop at Taco Cantina for a Horchata de arroz. Per rule 6 (“Controversial Beverage Rule”) this was adjudicated in the Court of Coffeeneuring Public Opinion on fedi and now only awaits a final ruling by the Coffeeneuring Challenge Committee and the Intern.

The overall ride was rather uneventful, but also highlights the major mobility gap in Cobb County, GA. The first four or so miles are from my house to the Cumberland Transfer Center which is the only place in the county that you can catch a rapid bus into Atlanta. I’m lucky enough to be close to the bus station, otherwise I would have to catch another bus to get there which, on the weekend, runs about once an hour. Not an ideal situation: trying to catch a transfer involving two buses that both run once an hour. If you bike, as I did, you can follow the Spring Road Trail and the Mountain to River Trail (M2R) all the way to the transfer center. While this is great in theory because it gets you off the 4 lane road where people regularly do 60mph, the road that the trail is next to is really more of a stroad so every retail establishment along the road has cars pulling out or in who aren’t looking for cyclists and it can be a rather harrowing ride.

Once you catch the bus and arrive at Arts Center Station the transit system improves a tiny bit. Transferring to the train is free and requires one transfer to the blue line which will take you to Inman Park/Reynoldstown Station. As I left the station I was surprised to find a small cycle path that I didn’t know about, Poplar Court Northeast, that cuts through the park to Euclid Ave right next to the Community Center!

The total distance round trip came out to 8.18 miles. A partial GPX trace for this ride can be found here.

Oct 6th

The next day I went out for a group ride. To get a few extra miles in this Biketober I decided to ride to Knight Park, where the group was meeting, instead of taking the bus. The total distance to the park was 10.26 miles, mostly along Atlanta Road. This may have been one of the stupider cycling decisions I’ve made in a while; in theory there is a bike path being built alongside the road, but there are major gaps with no plans to fill them in, and no good way to get off the road at points, especially when crossing bridges.

The group ride itself ended up being 10.36 miles, only 0.1 more than the commute, but was a much more enjoyable ride through Westside Park, Atlanta’s largest public park. At the end we stopped off at The Whelan for a bite and a drink! I went with a local Märzen; it’s not a coffee, but Oktoberfest beers seem like good Coffeeneuring drinks to me (though it did not come in the traditional Maß, which is always disappointing). During the winter proper this pub also serves mulled wine, so maybe I’ll stop back by for one of my last rides if they start serving it in time.

A sculpture of the words 'est 1893' with a blue bicycle in front of it

The total ride distance (after checking out an open streets festival and a partial ride home) came out to 24.35 miles. A partial GPX trace for this ride can be found here.

Week Two (Oct 07–13)

Oct 10th

I rode down to meet a friend at Viking Alchemist Meadery, an easy few miles along the M2R trail I mentioned before. I’d never had mead before so the bartender kindly let me try a few samples before deciding what I wanted. It’s not my favorite drink, but a few of the more sour ones were quite nice and a 4 year aged one that I can’t afford (but which he let me take a small sip of) was very nice and it felt like a good Coffeeneuring style drink to me!

I arrived a bit early and they weren’t open yet, so I explored a bit and ended up with a total ride of 5.72 miles. The GPX trace can be found here.

Oct 12th

A few days later the nearby city of Marietta, GA was having their annual Chalktoberfest celebration, two days of street art, beers, and a square that’s actually usable by people, not just cars! I took a ride up the M2R trail in the other direction this time to hand out some fliers encouraging people to vote for the Cobb County Mobility SPLOST which will fund a mix of Bus Rapid Transit (BRT) lines and bicycle path improvements in Cobb. On the way back I stopped at Pie Bar and had my first cup of actual coffee! The city of Smyrna was also having its birthday celebration on the same day so I ended up also going back and forth between home and the Smyrna downtown area several times to hear the various concerts. This year featured “Letters to Cleo” and “The Roots” as well as a number of local bands!

Letters to Cleo getting setup on a band stand in the Smyrna Village Green.

Overall ride length ended up being 18.66 miles and a GPX of the main M2R portion can be found here.

Week 3 (Oct 14-20)

Oct 15th

It’s October 15th and that means it’s the first day of early voting in Georgia! I took a ride over to the Smyrna Community Center to see what the line was like and was pleased to see that a lot more people than I’ve ever seen during early voting were already queued up (~1 hour wait time according to one of the poll workers).

An orange cover showing three floating green platforms with a ladder extending up to the top one with a sun rising above it. The lower two platforms have the words 'Everyday' and 'Utopia' sitting on them, and the bottom says 'What 2,000 years of wild experiments can teach us about the good life'.

Next door to the Community Center is the library, so I stopped by there as well and on a whim picked up a copy of “Everyday Utopia” by Kristen R. Ghodsee and “The Contradictions” by Sophie Yanow. I’d never heard of Yanow, but the book had a bicycle on the cover so that seemed like a good enough reason to grab it. I’ve had Ghodsee’s “Why Women Have Better Sex Under Socialism: And Other Arguments for Economic Independence” on my “To Read” list for a while, and wasn’t aware that my library had any of her books, so I was excited to learn about “Everyday Utopia”.

Right up the street is Café Lucia so I also stopped for a coffee to complete today’s Coffeeneuring outing. I was pleased to see that they had several fliers with information about the MSPLOST tax on the ballot on their bulletin board that were put out by someone else; sometimes it’s easy to think you’re the only person who cares about public transit, so it’s nice to remember that other people are out there fighting for this as well.

The total ride came out to 3.24 miles, and a GPX is available here.

Oct 16th

My ride today involved several errands on the way to the coffee shop. I first stopped by an auto parts store to drop off some used engine oil from my motorcycle, then biked over to the library to return “The Contradictions” (it was much shorter than I realized and was fun, despite comics not really being my thing).

The line looked shorter at the community center (15 minutes as opposed to the hour it would have taken yesterday) so I went in to cast my vote; the big ticket items for me were Harris for president, of course, but also the previously mentioned MSPLOST and a local Democratic Socialist running in GA HD 42. This is all more-or-less on the way to Rev Coffee where I stopped off to use the wifi and get a little work done. As always, the coffee was excellent, but the bike parking was inadequate and cramped.

A blue bicycle next to a sign that says 'Advance Voting' and has a picture of a U.S. flag. Behind it is a brick building with a short line out front and lots of cones set out to direct the line away from doors.

Oct 19th

Though only two rides per week count as Coffeeneuring rides (per rule 4), I rode up to Marietta for the monthly Cobb4Transit group ride and wanted to mention some of the work they’re doing. One of the things I like about this group ride is that it specifically focuses on riding a mix of safe streets, and streets where bicycle infrastructure could exist but has been neglected. It’s as much an education about the political situation in Marietta as it is about the ride itself. This is another of those group rides where getting to it without a car was longer than the actual ride for me, but we got to stop by one of my favorite coffee shops, Cool Beans afterwards.

A medium sized van with a table next to it that says Cobb County Public Library on it. A child and his mother are checking out a book. The vans side door is open, revealing that it is full of book cases.

As I was getting ready to ride back to Smyrna, a van owned by the Cobb County Library system pulled up and opened up a pop-up library! I had no idea the county library had a bookmobile and was very excited to see them out in the square loaning out books.

The total distance ended up being 20 miles and the GPX trace can be found here.

Week 4 (Oct 21-27)

Oct 24th

This was just a short ride down to Rose Garden Park to pick up a yard sign from Gabriel Sanchez, who’s running for the state house in district 42. With two weeks to go until election day, the campaign needs all the help it can get, so despite the fact that I couldn’t stay for the political canvassing I decided to at least stop by and put up a yard sign. On the way back I stopped off at Rev Coffee again. Since I did three rides last week (and only two can count), we’ll say that last week’s Rev trip was just a bonus ride.

Oct 26th

Today in the Smyrna town square was the annual “Crafts and Drafts” festival. I was populating the Cobb4Transit and Freedom from Traffic table most of the day, so I biked over and got a coffee (in the morning) and a beer in the afternoon. The Smyrna downtown area, on occasions like this when it’s free from cars, makes a great Coffee Shop Without Walls, per rule 3!

Final Push

Oct 30th

I’ve already hit the Coffeeneuring goal of 7 places for the month, but wanted to do one last update to wrap up Biketober and mention my longest ride! Wednesday the 30th I rode the Silver Comet Trail from my home in Smyrna, GA out to the Alabama border near the hamlet of Esom Hill.

I had originally planned on doing my first century into Anniston, AL but after a late start and a few mechanical issues I hadn’t made it very far by lunch time, so I stopped at the Tara Drummond Trailhead near Dallas, GA for a coffee and some lunch. With all the delays it was almost dark by the time I got to the border and I didn’t have much water left so I filled my bottle from a volunteer fire house in Esom Hill and then turned back for some campsites I’d passed earlier in the day near Rockmart, GA called Camp Comet. This put me at around 85 miles for the day instead of my desired 100, but meant I had a guaranteed good place to sleep and the promise of coffee in Rockmart the next morning!

A small green tent with a light on inside that shines through the rain fly next to a blue bicycle with panniers. It is dusk and the light is low and it's all a bit hard to see.

Oct 31st

I stopped off at a trailside shop in the morning to re-fill my water, then went into Rockmart for a coffee. Continuing on back I also stopped off in Powder Springs at Skint Chestnut Brewing Company for lunch and a beer, as well as a different coffee shop in Dallas so I feel like I got my Coffeeneuring fix in this trip!

The total distance ridden over both days ended up being 135.37 miles, and I had a blast camping for the first time in a long time! This year was the 10 year anniversary of my Appalachian Trail thru-hike and due to a mix of financial constraints, prior job constraints, and social constraints I’ve barely done any camping or long distance hiking since then. Even if it was only 2 days it felt great to get outside and do some miles again!

Two stacked stone pillars with a big red metal arch on top that says 'Chief Ladiga'. The bike path runs through the arch and a bicycle sits under it.

Wrap Up

I ended up visiting well over 7 new places this month (though I only blogged about some of them), so I’m well situated for the end of the Coffeeneuring season! For Biketober the official results aren’t in yet, but I came in second place for my team with 350 miles covered over the month, and having ridden at least a small ride every day of the month!

A three month calendar showing September with some days highlighted, October with all days highlighted, and November greyed out.

October 14, 2024 15:57

October 10, 2024

Erlang Solutions

Why Open Source Technology is a Smart Choice for Fintech Businesses

Traditionally, the fintech industry relied on proprietary software, with usage and distribution restricted by paid licences. Fintech open-source technologies were distrusted due to security concerns over visible code in complex systems.

But fast-forward to today and financial institutions, including neobanks like Revolut and Monzo, have embraced open source solutions. These banks have built technology stacks on open-source platforms, using new software and innovation to strengthen their competitive edge.

While proprietary software has its role, it faces challenges exemplified by Oracle/Java’s subscription model changes, which have led to significant cost hikes. In contrast, open source delivers flexibility, scalability, and more control, making it a great choice for fintechs aiming to remain adaptable.

Curious why open source is the smart choice for fintech? Let’s look into how this shift can help future-proof operations, drive innovation, and enhance customer-centric services.

The impact of Oracle Java’s pricing changes

Before we understand why open source is a smart choice for fintech, let’s look at a recent example that highlights the risks of relying on proprietary software—Oracle Java’s subscription model changes.

A change to subscription

Java, known as the “language of business,” has been the top choice for developers and 90% of Fortune 500 companies for over 28 years, due to its stability, performance, and strong Oracle Java community.

In January 2023, Oracle quietly shifted its Java SE subscription model to an employee-based system, charging businesses based on total headcount, not just the number of users. This change alarmed many subscribers and resulted in steep increases in licensing fees. According to Gartner, these changes made operations two to five times more expensive for most organisations.


Oracle Java SE Universal Subscription Global Price List (by volume)

Impact on Oracle Java SE user base

By January 2024, many Oracle Java SE subscribers had switched to OpenJDK, the open-source version of Java. Online sentiment towards Oracle has been unfavourable, with many users expressing dissatisfaction in forums. Those who stuck with Oracle are now facing hefty subscription fee increases with little added benefit.

Lessons from Oracle Java SE

For fintech companies, Oracle Java’s pricing changes have highlighted the risks of proprietary software. In particular, there are unexpected cost hikes, less flexibility, and disruptions to critical infrastructure. Open source solutions, on the other hand, give fintech firms more control, reduce vendor lock-in, and allow them to adapt to future changes while keeping costs in check.

The advantages of open source technologies for Fintech

Open source software is gaining attention in financial institutions, thanks to the rise of digital financial services and fintech advancements. 

It is expected to grow by 24% by 2025, and companies that embrace open source benefit from enhanced security, support for cryptocurrency trading, and a boost to fintech innovation.

Cost-effectiveness

The cost advantages of open-source software have been a major draw for companies looking to shift from proprietary systems. For fintech companies, open-source reduces operational expenses compared to the unpredictable, high costs of proprietary solutions like Oracle Java SE.

Open source software is often free, allowing fintech startups and established firms to lower development costs and redirect funds to key areas such as compliance, security, and user experience. It also avoids fees like:

  • Multi-user licences
  • Administrative charges
  • Ongoing annual software support charges

These savings help reduce operating expenses while enabling investment in valuable services like user training, ongoing support, and customised development, driving growth and efficiency.

A solution to big tech monopolies

Monopolies in tech, particularly in fintech, are increasing. As reported by CB Insights, about 80% of global payment transactions are controlled by just a few major players. These monopolies stifle innovation and drive up costs.

Open-source software decentralises development, preventing any single entity from holding total control. It offers fintech companies an alternative to proprietary systems, reducing reliance on monopolistic players and fostering healthy competition. Open-source models promote transparency, innovation, and lower costs, helping create more inclusive and competitive systems.

Transparent and secure solutions

Security concerns have been a major roadblock that causes companies and startups to hesitate in adopting open-source software.

A common myth about open source is that its public code makes it insecure. In fact, open source benefits from transparency, as it allows continuous public scrutiny. Security flaws are discovered and addressed quickly by the community, unlike proprietary software, where vulnerabilities may remain hidden.

An example is Vocalink, which powers real-time global payment systems. Vocalink uses Erlang, an open-source language designed for high-availability systems, ensuring secure, scalable payment handling. The transparency of open source allows businesses to audit security, ensure compliance, and quickly implement fixes, leading to more secure fintech infrastructure.

Ongoing community support

Beyond security, open source benefits from vibrant communities of developers and users who share knowledge and collaborate to enhance software. This fosters innovation and accelerates development, allowing for faster adaptation to trends or market demands.

Since the code is open, fintech firms can build custom solutions, which can be contributed back to the community for others to use. The rapid pace of innovation within these communities helps keep the software relevant and adaptable.

Interoperability

Interoperability is a game-changer for open-source solutions in financial institutions, allowing for the seamless integration of diverse applications and systems, which is essential for financial services with complex tech stacks.

By adopting open standards (publicly accessible guidelines ensuring compatibility), financial institutions can eliminate costly manual integrations and enable plug-and-play functionality. This enhances agility, allowing institutions to adopt the best applications without being tied to a single vendor.

A notable example is NatWest’s Backplane, an open-source interoperability solution built on FDC3 standards. Backplane enables customers and fintechs to integrate their desktop apps with various banking and fintech applications, enhancing the financial desktop experience. This approach fosters innovation, saves time and resources, and creates a more flexible, customer-centric ecosystem.

Future-proofing for longevity

Open-source software has long-term viability. Since the source code is accessible, even if the original team disbands, other organisations, developers or the community at large can maintain and update the software. This ensures the software remains usable and up-to-date, preventing reliance on unsupported tools.

Open Source powering Fintech trends

According to the latest study by McKinsey and Company, Artificial Intelligence (AI), machine learning (ML), blockchain technology, and hyper-personalisation will be among the key technologies driving financial services in the next decade.

Open-source platforms will play a key role in supporting and accelerating these developments, making them more accessible and innovative.

AI and fintech innovation

  • Cost-effective AI/ML: Open-source AI frameworks like TensorFlow, PyTorch, and Scikit-learn enable startups to prototype and deploy AI models affordably, with the flexibility to scale as they grow. This democratisation of AI allows smaller players to compete with larger firms.
  • Fraud detection and personalisation: AI-powered fraud detection and personalised services are central to fintech innovation. Open-source AI libraries help companies like Stripe and PayPal detect fraudulent transactions by analysing patterns, while AI enables dynamic pricing and custom loan offers based on user behaviour.
  • Efficient operations: AI streamlines back-office tasks through automation, knowledge graphs, and natural language processing (NLP), improving fraud detection and overall operational efficiency.
  • Privacy-aware AI: Emerging technologies like federated learning and encryption tools help keep sensitive data secure, enabling rapid AI innovation while ensuring privacy and compliance.
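The pattern-analysis idea behind fraud detection can be illustrated with a deliberately tiny sketch: flag a transaction when its amount deviates sharply from a user's history. This is an assumption-laden toy (real providers use far richer features and models, not a single z-score rule), written in plain Python with a hypothetical `flag_suspicious` helper:

```python
# Toy illustration only: flag transactions whose amount is far from the
# user's historical mean (a simple z-score rule). Real fraud systems use
# many features and trained models; this just shows the core idea.
from statistics import mean, stdev

def flag_suspicious(history, new_amount, threshold=3.0):
    """Return True if new_amount lies more than `threshold` standard
    deviations from the mean of this user's past transaction amounts."""
    if len(history) < 2:
        return False  # not enough data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    return abs(new_amount - mu) / sigma > threshold

history = [12.50, 9.99, 14.20, 11.75, 13.00, 10.40]
print(flag_suspicious(history, 12.00))   # a typical amount
print(flag_suspicious(history, 950.00))  # a large outlier
```

The same shape of logic, scaled up with open-source ML libraries, is what makes real-time pattern analysis affordable for smaller fintechs.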

Blockchain and fintech 

Open-source blockchain platforms allow fintech startups to innovate without the hefty cost of proprietary systems:

  • Open-source blockchain platforms: Platforms like Ethereum, Bitcoin Core, and Hyperledger are decentralising finance, providing transparency, reducing reliance on intermediaries, and reshaping financial services.
  • Decentralised finance (DeFi): DeFi is projected to see an impressive rise, with P2P lending growing from $43.16 billion in 2018 to an estimated $567.3 billion by 2026. Platforms like Uniswap and Aave, built on Ethereum, are pioneering decentralised lending and asset management, offering an alternative to traditional banking. By 2023, Ethereum alone locked $23 billion in DeFi assets, proving its growing influence in the fintech space.
  • Enterprise blockchain solutions: Open-source frameworks like Hyperledger Fabric and Corda are enabling enterprises to develop private, permissioned blockchain solutions, enhancing security and scalability across industries, including finance.

  • Cost-effective innovation: Startups leveraging open-source blockchain technologies can build innovative financial services while keeping costs low, helping them compete effectively with traditional financial institutions.

Hyper-personalisation

Hyper-personalisation is another key trend in fintech, with AI and open-source technologies enabling companies to create highly tailored financial products. This shift moves away from the traditional “one-size-fits-all” model, helping fintechs solve niche customer challenges and deliver more precise services.

Consumer demand for personalisation

A Salesforce survey found that 65% of consumers expect businesses to personalise their services, while 86% are willing to share data to receive more customised experiences.


Source: State of the Connected Customer

The expectation for personalised services is shaping how financial institutions approach customer engagement and product development.

Real-world examples of open-source fintech

Companies like Robinhood and Chime leverage open-source tools to analyse user data and create personalised financial recommendations. These platforms use technologies like Apache Kafka and Apache Spark to process real-time data, improving the accuracy and relevance of their personalised offerings, from customised investment options to tailored loan products.

Implementing hyper-personalisation lets fintech companies strengthen customer relationships, boost retention, and increase deposits. By leveraging real-time, data-driven technologies, they can offer highly relevant products that foster customer loyalty and maximise value throughout the customer lifecycle. With the scalability and flexibility of open-source solutions, companies can provide precise, cost-effective personalised services, positioning themselves for success in a competitive market.

Erlang and Elixir: Open Source solutions for fintech applications

Released as open-source in 1998, Erlang has become essential for fintech companies that need scalable, high-concurrency, and fault-tolerant systems. Its open-source nature, combined with the capabilities of Elixir (which builds on Erlang’s robust architecture), enables fintech firms to innovate without relying on proprietary software, providing the flexibility to develop custom and efficient solutions.

Both Erlang's and Elixir's architectures are designed for potentially zero downtime, making them well-suited for real-time financial transactions.

Why Erlang and Elixir are ideal for Fintech:

  • Reliability: Erlang’s and Elixir’s design ensures that applications continue to function smoothly even during hardware or network failures, crucial for financial services that operate 24/7, guaranteeing uninterrupted service. Elixir inherits Erlang’s reliability while providing a more modern syntax for development.
  • Scalability: Erlang and Elixir can handle thousands of concurrent processes, making them perfect for fintech companies looking to scale quickly, especially when dealing with growing data volumes and transactions. Elixir enhances Erlang’s scalability with modern tooling and enhanced performance for certain types of workloads.
  • Fault tolerance: Built-in error detection and recovery features ensure that unexpected failures are managed with minimal disruption. This is vital for fintech applications, where downtime can lead to significant financial losses. Erlang's automatic recovery philosophy and Elixir's supervision features help deliver continuous availability and ensure that no transaction is lost.
  • Concurrency & distribution: Both Erlang and Elixir excel at managing multiple concurrent processes across distributed systems. This makes them ideal for fintechs with global operations that require real-time data processing across various locations.
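The supervision idea behind that fault tolerance can be sketched outside of Erlang too. The following is a minimal Python analogy, not OTP itself: real Erlang/OTP supervisors offer restart strategies, intensity limits, and process isolation that this toy loop does not. The `flaky_payment` worker and `supervise` function are hypothetical names for illustration:

```python
# A minimal Python analogy to Erlang-style supervision (illustrative only;
# real OTP supervisors provide restart strategies, limits, and isolation).
def supervise(task, max_restarts=3):
    """Run `task`; on failure, restart it up to max_restarts times."""
    for attempt in range(max_restarts + 1):
        try:
            return task()
        except Exception as exc:
            print(f"worker crashed ({exc}); restart {attempt + 1}")
    raise RuntimeError("restart limit reached")

attempts = {"n": 0}

def flaky_payment():
    """A worker that fails twice with a transient error, then succeeds."""
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient network error")
    return "payment processed"

print(supervise(flaky_payment))  # succeeds on the third attempt
```

The "let it crash and restart" approach keeps the failure-handling policy in one place instead of scattering try/except logic through every worker, which is part of why it suits always-on financial systems.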

Open-source fintech use cases

Several leading fintech companies have already used Erlang to build scalable, reliable systems that support their complex operations and real-time transactions.

  • Klarna: This major European fintech relies on Erlang to manage real-time e-commerce payment solutions, where scalability and reliability are critical for managing millions of transactions daily.
  • Goldman Sachs: Erlang is utilised in Goldman Sachs’ high-frequency trading platform, allowing for ultra-low latency and real-time processing essential for responding to market conditions in microseconds.
  • Kivra: Erlang/Elixir supports Kivra’s backend services, managing secure digital communications for millions of users and ensuring constant uptime and data security.

Erlang and Elixir: supporting future fintech trends

The features of Erlang and Elixir align well with emerging fintech trends:

  • DeFi and Decentralised Applications (dApps): With the growth of decentralised finance, Erlang’s and Elixir’s fault tolerance and real-time scalability make them ideal for building dApps that require secure, distributed networks capable of handling large transaction volumes without failure.
  • Hyperpersonalisation: As demand for hyperpersonalised financial services grows, Erlang and Elixir’s ability to process vast amounts of real-time data across users simultaneously makes them vital for delivering tailored, data-driven experiences.
  • Open banking: Erlang and Elixir’s concurrency support enables fintechs to build seamless, scalable platforms in the open banking era, where various financial systems must interact across multiple applications and services to provide integrated solutions.

Erlang and Elixir can handle thousands of real-time transactions with zero downtime, making them well-suited for trends like DeFi, hyperpersonalisation, and open banking. Their flexibility and active developer community ensure that fintechs can innovate without being locked into costly proprietary software.

To conclude

Fintech businesses are navigating an increasingly complex and competitive landscape where traditional solutions no longer provide a competitive edge. If you’re a company still reliant on proprietary software, ask yourself: Is your system equipped to expect the unexpected? Can your existing solutions keep up with market demands? 

Open-source technologies offer a solution to these challenges. Fintech firms can reduce costs, improve security, and, most importantly, innovate and scale according to their needs. Whether by reducing vendor lock-ins, tapping into a vibrant developer community, or leveraging customisation, open-source software is set to transform the fintech experience, providing the tools necessary to stay ahead in a digital-first world. If you’re interested in exploring how open-source solutions like Erlang or Elixir can help future-proof your fintech systems, contact the Erlang Solutions team.

The post Why Open Source Technology is a Smart Choice for Fintech Businesses appeared first on Erlang Solutions.

by Erlang Solutions Team at October 10, 2024 09:40

XMPP Interop Testing

Incoming: Improvements!

A new boost in the project’s budget will allow us to approximately double the test coverage of our project (and add a couple of nice features)!

Much of the XMPP Interop Testing project was made possible as the work was funded through the NGI0 Core Fund. This is a fund established by NLnet with financial support from the European Commission’s Next Generation Internet programme.

It is quite remarkable how far the effects of funding reach: it allowed us to work out our plans to take various pre-existing bits and bobs and quickly and efficiently turn a small tool used for internal testing into a proper testing framework that any XMPP server implementation can use. That snowballed into bug fixes for server implementations, and improvements to specifications used by many. A relatively small fund thus improved the quality of open standard-based communication used in one shape or another by countless people, daily!

We are so happy and grateful to NLnet for boosting our project’s grant! With the additional work, we will add the following improvements:

  • Have better test coverage by writing more tests;
  • Improve feedback when tests fail or do not run at all;
  • Add a new test account provisioning option;
  • Improve test selection configuration;
  • Automate recurring maintenance tasks;
  • Add support for other build systems.

All of this will help us improve our framework, help our users improve their products, and allow new projects to deploy our open and free solutions into their CI pipelines more easily!

You can expect a lot of these improvements to become available to you, soon!

by Guus der Kinderen at October 10, 2024 09:10

October 08, 2024

ProcessOne

Easily Monitor Push Notifications with Fluux ejabberd Managed Service

Good news! You can now monitor your push notifications directly through your fluux.io console, offering better visibility into the status and delivery of messages to your users.

How to Access Push Logs

To start, navigate to the Push Apps list in your Fluux.io dashboard (see WebPush support for instructions). From there, you can access the Push Logs section, which provides detailed insights into the push notifications sent from your server.

alt

Tracking Push Notification Status

Once a push notification is triggered, you can check its status through the push gateway. This feature offers critical information about your users’ devices and the delivery process, enabling you to troubleshoot common issues more efficiently. For example, you may encounter:

  • BadToken: This indicates that something went wrong during the device registration process, rendering the token invalid.
  • Token Gone: This means the device’s registration has been canceled by the push provider, for example because the app has been uninstalled, the device replaced, or the token rotated.

In both scenarios, your client application will need to re-register the user’s device to ensure successful notifications going forward.
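
Both failure cases call for the same client-side reaction. Here is a minimal sketch of that logic in Python; the function and callback names are hypothetical illustrations, not part of the Fluux API:

```python
# Statuses that invalidate a device token, as described above.
INVALID_TOKEN_STATUSES = {"BadToken", "TokenGone"}

def handle_push_status(status: str, token: str, re_register_device) -> str:
    """React to a push-gateway status reported for a device token."""
    if status in INVALID_TOKEN_STATUSES:
        # The token is unusable: the client application must register
        # the device again to receive notifications going forward.
        re_register_device(token)
        return "re-registration requested"
    return "no action needed"
```

In a real client backend, `re_register_device` would trigger whatever registration flow the app used initially.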

alt

Debugging Delivery to User Devices

In addition to error tracking, you’ll also be able to see when a push has been successfully delivered to the gateway. If you’re working with Apple’s push notifications, APNs sandbox entries will include an apns-uniq-id. This identifier allows you to further trace the delivery of messages to the user’s device via the Apple Cloud dashboard, a handy tool for those deep-dive debugging sessions.

by Sébastien Luquet at October 08, 2024 13:22


October 04, 2024

The XMPP Standards Foundation

The XMPP Newsletter September 2024

XMPP Newsletter Banner

XMPP Newsletter Banner

Welcome to the XMPP Newsletter, great to have you here again! This issue covers the month of September 2024.

XSF Announcements

If you are interested in joining the XMPP Standards Foundation as a member, please apply by November 24th, 2024!

The XMPP Standards Foundation is also calling for candidates for the XSF Board 2024 and XSF Council 2024. Get involved in the XMPP Standards Foundation’s organisational decisions as well as in the specifications we publish. If you are interested in running for Board or Council, please add a wiki page about your candidacy to one or both of the following sections by November 3rd, 2024, 00:00 UTC. Note: XMPP Council members must be elected members of the XSF; however, there is no such restriction for the Board of Directors.

XMPP and Google Summer of Code 2024

The XSF has been accepted as a hosting organisation at GSoC in 2024 again! These XMPP projects have received a slot and have kicked-off with coding:

XSF and Google Summer of Code 2024

XSF and Google Summer of Code 2024

XSF Fiscal Hosting Projects

The XSF offers fiscal hosting for XMPP projects. Please apply via Open Collective. For more information, see the announcement blog post. Current projects you can support:

XMPP Events

  • Berlin XMPP Meetup (DE / EN): monthly meeting of XMPP enthusiasts in Berlin, every 2nd Wednesday of the month at 6pm local time
  • XMPP Italian happy hour [IT]: monthly Italian XMPP web meeting, every third Monday of the month at 7:00 PM local time (online event, with web meeting mode and live streaming).

Videos

  • Detailed and comprehensive introduction to Rivista XJP: the XMPP PubSub Content Management System.

XMPP Articles

XMPP Software News

XMPP Clients and Applications

  • Cheogram has released version 2.15.3-4 for Android.
  • Conversations has released version 2.16.7 for Android.
  • Psi+ 1.5.2041 installer has been released.
  • Gajim 1.9.4 and 1.9.5 have been released. These releases come with integrated support for the XMPP Providers project. Furthermore, there is now support for “Hats” (XEP-0317), which allows you to assign roles to group chat participants, e.g. “Support”, “Expert”, or really anything you like. Last but not least, Gajim’s Microsoft Store release has been improved in many ways. You can check the changelog for more details.
  • Movim 0.28 has been released. This new version (code named “Tempel”) brings a “Freshly redesigned Search panel, improved account gateways and administration features, databases fixes and a new call flow and conference lobby” among many other fixes and improvements.
Movim 0.28 (Tempel) Introducing the new call flow and conference lobby

Movim 0.28 (Tempel) Introducing the new call flow and conference lobby

XMPP Servers

XMPP Libraries & Tools

Ignite Realtime community:

Extensions and specifications

The XMPP Standards Foundation develops extensions to XMPP in its XEP series in addition to XMPP RFCs.

Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).

Proposed

The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

New

  • Version 0.1.0 of XEP-0493 (OAuth Client Login)
    • Promoted to Experimental (XEP Editor: dg)
  • Version 0.1.0 of XEP-0494 (Client Access Management)
    • Promoted to Experimental (XEP Editor: dg)

Deferred

If an experimental XEP is not updated for more than twelve months, it will be moved off Experimental to Deferred. If there is another update, it will put the XEP back onto Experimental.

  • No XEPs deferred this month.

Updated

  • Version 2.13.2 of XEP-0004 (Data Forms)
    • Add section on empty and absent values. (gk)
  • Version 1.35.1 of XEP-0045 (Multi-User Chat)
    • Add explicit error definition when non-owners attempt to use owner-specific functionality. (gk)
  • Version 1.3.1 of XEP-0133 (Service Administration)
    • Fixed typo in example for Get User Last Login Time (dc)
  • Version 0.4.2 of XEP-0264 (Jingle Content Thumbnails)
    • Restrict ‘width’ and ‘height’ to the 0..65535 range, instead of being unbounded integers. This is in accordance with XEP-0084 and XEP-0221, for instance. (egp)
  • Version 0.2.0 of XEP-0272 (Multiparty Jingle (Muji))
    • Send Jingle IQs to real JID
    • Define how to use with XEP-0482
    • Adjust namespace (lmw)
  • Version 1.1.2 of XEP-0313 (Message Archive Management)
    • Fix JID and affiliation of the first two witches in the MUC example.
    • Fix duplicated ‘id’ in MUC example.
    • Fix indentation in examples. (egp)
  • Version 0.3.1 of XEP-0474 (SASL SCRAM Downgrade Protection)
    • Fix typos
    • Adapt attack-model section to new simplified protocol (tm)

Last Call

Last calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call can help improve the XEP before returning it to the Council for advancement to Stable.

  • No Last Call this month.

Stable

  • No XEP moved to Stable this month.

Deprecated

  • No XEP deprecated this month.

Rejected

  • No XEP rejected this month.

Spread the news

Please share the news on other networks:

Subscribe to the monthly XMPP newsletter
Subscribe

Also check out our RSS Feed!

Looking for job offers or want to hire a professional consultant for your XMPP project? Visit our XMPP job board.

Newsletter Contributors & Translations

This is a community effort, and we would like to thank translators for their contributions. Volunteers and more languages are welcome! Translations of the XMPP Newsletter will be released here (with some delay):

  • English (original): xmpp.org
    • General contributors: Adrien Bourmault (neox), Alexander “PapaTutuWawa”, Arne, cal0pteryx, emus, Federico, Gonzalo Raúl Nemmi, Jonas Stein, Kris “poVoq”, Licaon_Kter, Ludovic Bocquet, Mario Sabatino, melvo, MSavoritias (fae,ve), nicola, Schimon Zachary, Simone Canaletti, singpolyma, XSF iTeam
  • French: jabberfr.org and linuxfr.org
    • Translators: Adrien Bourmault (neox), alkino, anubis, Arkem, Benoît Sibaud, mathieui, nyco, Pierre Jarillon, Ppjet6, Ysabeau
  • Italian: notes.nicfab.eu
    • Translators: nicola
  • Spanish: xmpp.org
    • Translators: Gonzalo Raúl Nemmi
  • German: xmpp.org
    • Translators: Millesimus

Help us to build the newsletter

This XMPP Newsletter is produced collaboratively by the XMPP community. Each month’s newsletter issue is drafted in this simple pad. At the end of each month, the pad’s content is merged into the XSF Github repository. We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. You have a project and want to spread the news? Please consider sharing your news or events here, and promote it to a large audience.

Tasks we do on a regular basis:

  • gathering news in the XMPP universe
  • short summaries of news and events
  • summary of the monthly communication on extensions (XEPs)
  • review of the newsletter draft
  • preparation of media images
  • translations
  • communication via media accounts

Unsubscribe from the XMPP Newsletter

To unsubscribe from this list, please log in first. If you have not previously logged in, you may need to set up an account with the appropriate email address.

License

This newsletter is published under CC BY-SA license.

October 04, 2024 00:00

October 03, 2024

Erlang Solutions

Why do systems fail? Tandem NonStop system and fault tolerance

If you’re an Elixir, Gleam, or Erlang developer, you’ve probably heard about the capabilities of the BEAM virtual machine, such as concurrency, distribution, and fault tolerance. Fault tolerance was one of the biggest concerns of Tandem Computers. They created their Tandem Non-Stop architecture for high availability in their systems, which included ATMs and mainframes.

In this post, I’ll be sharing the fundamentals of the NonStop architecture design with you. Their approach to achieving high availability in the presence of failures is similar to some implementations in the Erlang Virtual Machine, as both rely on concepts of processes and modularity.

Systems with High Availability

Why do systems fail? This question should probably be asked more often, considering all the factors it involves. It was central to the NonStop architecture because achieving high availability depends on understanding system failures. 

For Tandem systems, any system has critical components that could potentially cause failures. How often do you ask yourself how long your system can operate before a failure? There is a metric known as MTBF (mean time between failures), calculated by dividing the total operating hours of the system by the number of failures. The result represents the average number of hours of uninterrupted operation.

Many factors can affect the MTBF, including administration, configuration, maintenance, power outages, hardware failures, and more. So, how can you survive these eventualities to achieve at least virtual high availability in your systems?
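
As a concrete illustration of the metric, here is the MTBF calculation in Python. The availability formula using MTTR (mean time to repair) is a standard companion metric, added here for context rather than taken from the article:

```python
def mtbf(operating_hours: float, failures: int) -> float:
    """Mean time between failures: total operating hours / number of failures."""
    return operating_hours / failures

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Fraction of time the system is up, given the mean time to repair."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A system that ran for a year (8760 hours) and failed 4 times:
print(mtbf(8760, 4))          # 2190.0 hours of uninterrupted operation on average
print(availability(2190, 2))  # if each repair takes ~2 hours, uptime is ~99.9%
```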

Tandem NonStop critical components

High availability in hardware has taught us important insights about continuous operation. Some hardware implementations decompose the system into modules, using modularity to contain failures and maintaining operation through backup modules instead of breaking the whole system and needing to restart it. The key concept, from this point of view, is to use modules as units of failure and replacement.

Tandem NonStop system in modules

High Availability for Software Systems

But what about high availability in software? Just as with hardware, we can find important lessons from operating system designers who decompose systems into modules as units of service. This approach provides a mechanism for having a unit of protection and fault containment.

To achieve fault tolerance in software, it’s important to address similar insights from the NonStop design:

  • Modularity through processes and messages.
  • Fault containment.
  • Process pairs for fault tolerance.
  • Data integrity.

Can you recognise some similarities so far?

The NonStop architecture essentially relies on these concepts. The key to high availability, as I mentioned before, is modularity as a unit of service failure and protection.

A process should have a fail-fast mechanism, meaning it should be able to detect a failure during its operation, send a failure signal and then stop its operation. In this way, a system can achieve fault detection through fault containment and by sharing no state. 

Tandem NonStop primary backup

Another important consideration for your system is how long it takes to recover from a failure. Jim Gray, a software designer and researcher at Tandem Computers, in his paper “Why Do Computers Stop and What Can Be Done About It?”, proposed a model of failure involving two kinds of bugs: Bohrbugs, which are deterministic and cause critical failures whenever they are triggered, and Heisenbugs, which are transient, hard to reproduce, and can lurk in a system for years.

Implementing Processes-Pairs Strategies

The previous categorisation helps us better understand the strategies for implementing a processes-pair design, based on a primary process and a backup process:

  • Lockstep: Primary and backup processes execute the same task, so if the primary fails, the backup continues the execution. This works well for hardware failures, but in the presence of Heisenbugs both processes will hit the same failure.
  • State checkpointing: A requestor entity is connected to a processes-pair. When the primary process stops operating, the requestor switches to the backup process. You need to design the requestor logic yourself.
  • Automatic checkpointing: Similar to the previous strategy, but the kernel manages the checkpointing.
  • Delta checkpointing: Similar to state checkpointing, but using logical rather than physical updates.
  • Persistence: When the primary process fails, the backup process starts operating without state. The system must implement a way to synchronise all the modules and avoid corrupted interactions.
Tandem NonStop processes pairs
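
The state-checkpointing strategy can be sketched in a few lines of Python. This is a deliberately simplified single-machine analogy with illustrative names, not how Tandem (or the BEAM) actually implements it:

```python
class Process:
    """A unit of service and failure with a fail-fast behaviour."""
    def __init__(self):
        self.state = 0
        self.alive = True

    def handle(self, request):
        if not self.alive:
            # Fail fast: signal the failure instead of limping on.
            raise RuntimeError("process is down")
        self.state += request
        return self.state

class Requestor:
    """Talks to a processes-pair and owns the switch-over logic."""
    def __init__(self, primary, backup):
        self.primary, self.backup = primary, backup

    def call(self, request):
        try:
            result = self.primary.handle(request)
            # State checkpointing: copy the primary's state to the
            # backup after every successful operation.
            self.backup.state = self.primary.state
            return result
        except RuntimeError:
            # Primary failed: the backup continues from the checkpoint.
            return self.backup.handle(request)

primary, backup = Process(), Process()
requestor = Requestor(primary, backup)
requestor.call(5)         # handled by the primary; state checkpointed
primary.alive = False     # simulate a primary failure
print(requestor.call(3))  # the backup continues from state 5 and prints 8
```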

All of these insights are drawn from Jim Gray’s paper, written in 1985 and referenced in Joe Armstrong’s 2003 thesis, “Making Reliable Distributed Systems in the presence of software errors”. Joe emphasised the importance of the Tandem NonStop system design as an inspiration for the OTP design principles. 

Elixir and High Availability

So if you’re a software developer learning Elixir, you’ll probably be amazed by all the capabilities and great tooling available to build software systems. By leveraging frameworks like Phoenix and toolkits such as Ecto, you can build full-stack systems in Elixir. However, to fully harness the power of the Erlang virtual machine (BEAM) you must understand processes. 

Just as the Tandem computer system relied on transactions, fault containment and a fail-fast mechanism, Erlang achieves high availability through processes. Both systems consider it important to modularise systems into units of service and failure: processes. 

About the process

A process is the basic unit of abstraction in Erlang, a crucial concept because the Erlang virtual machine (BEAM) is built around it. Elixir and Gleam run on the same virtual machine, which is why this concept matters for the entire ecosystem.

A process:

  • is a strongly isolated entity;
  • is lightweight to create and destroy;
  • can be interacted with only through message passing;
  • shares no state with other processes;
  • does what it is supposed to do or fails.

Just remember, these are the fundamentals of Erlang, which is considered a message-oriented language, and of its virtual machine (BEAM), on which Elixir runs.
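
As a loose analogy, those properties can be mimicked in Python: private state, a mailbox, interaction only via messages, and failing instead of continuing in a bad state. Real BEAM processes are far lighter than anything sketched here; this only mirrors the interface:

```python
from queue import Queue

class Actor:
    """Toy analogy of an Erlang process (illustrative, not the BEAM)."""
    def __init__(self):
        self.mailbox = Queue()   # message passing is the only way in
        self._count = 0          # state no other actor can reach directly

    def send(self, msg):
        self.mailbox.put(msg)

    def run_one(self):
        msg = self.mailbox.get()
        if msg == "crash":
            # Do what it is supposed to do or fail: no limping on.
            raise RuntimeError("let it crash")
        self._count += 1
        return self._count
```

In Erlang or Elixir, a crashed process would then be restarted by a supervisor rather than handled inline.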

Tandem NonStop BEAM

If you want to read more about processes in Elixir I recommend reading this article I wrote: Understanding Processes for Elixir Developers.

I consider it important to read papers like Jim Gray’s article because they teach us the history behind implementations that attempt to solve problems. I find it interesting to read and share these insights with the community because it’s crucial to understand the context behind the tools we use. Recognising that implementations exist for a reason and have stories behind them is essential.

You can find many similarities between the Tandem and Erlang design principles:

  • Both aim to achieve high availability.
  • Isolation of operations is extremely important to contain failures.
  • Processes that share no state are crucial for building modular systems.
  • Process interactions are key to maintaining operation in the presence of errors. While Tandem computers implemented a process-pairs design, Erlang implemented OTP patterns.

To conclude

Take some time to read about the Tandem computer design. It’s interesting because its features share significant similarities with the OTP design principles for achieving high availability. Failure is something we need to deal with in any kind of system, and it’s important to be aware of its causes and to know what you can do to manage it and keep operating. This is crucial for any software developer, but if you’re an Elixir developer, you’ll probably want to dive deeper into how processes work and how to start designing components with them and OTP.

Thanks for reading about the Tandem NonStop system. If you like this kind of content, I’d appreciate it if you shared it with your community or teammates. You can visit this public repository on GitHub where I’m adding my graphic recordings and insights related to the Erlang ecosystem or contact the Erlang Solutions team to chat more about Erlang and Elixir.

Tandem NonStop Joe Armstrong

Illustrations by Visual Partner-Ship @visual_partner 

Jaguares, ESL Americas Office 

@carlogilmar

The post Why do systems fail? Tandem NonStop system and fault tolerance appeared first on Erlang Solutions.

by Carlo Gilmar at October 03, 2024 12:02

October 02, 2024

Ignite Realtime Blog

XMPP: The Protocol for Open, Extensible Instant Messaging

Introduction to XMPP

XMPP, the Extensible Messaging and Presence Protocol, is an Instant Messaging (IM) standard of the Internet Engineering Task Force (IETF) - the same organization that standardized Email (POP/IMAP/SMTP) and the World Wide Web (HTTP) protocols. XMPP evolved out of the early XML streaming technology developed by the XMPP Open Source community and is now the leading protocol for exchanging real-time structured data. XMPP can be used to stream virtually any XML data between individuals or applications, making it a perfect choice for applications such as IM.

A Brief History

IM has a long history, existing in various forms on computers as soon as they were attached to networks. Most IM systems were designed in isolation using closed networks and/or proprietary protocols, meaning each system can only exchange messages with users on the same IM network. Users on different IM networks often can’t send or receive messages, or do so with drastically reduced features because the messages must be transported through “gateways” that use a least common denominator approach to message translation.

The problem of isolated, proprietary networks in IM systems today is similar to email systems in the early days of computer networks. Fortunately for email, the IETF created early standards defining the protocols and data formats that should be used to exchange email. Email software vendors rapidly switched to the IETF standards to provide universal exchange of email among all email users on the Internet.

In 2004 the IETF published RFC 3920 and 3921 (the “Core” and “Instant Messaging and Presence” specifications for instant messaging) officially adding XMPP, mostly known as Jabber at the time, to the list of Internet standards. A year later, Google introduced Google Talk, a service that uses XMPP as its underlying protocol.

Google’s endorsement of the XMPP protocol greatly increased the visibility and popularity of XMPP and helped pave the way for XMPP to become the Internet IM standard. Over the years, more and more XMPP-based solutions followed: from WhatsApp, Jitsi, Zoom and Grindr in the IM sphere, Google Cloud Print, Firebase Cloud Messaging and Logitech’s Harmony Hub in the IoT realm, to Nintendo Switch, Fortnite and League of Legends in the world of gaming.

XMPP: Open, Extensible, XML Instant Messaging

The XMPP protocol benefits from three primary features that appeal to administrators, end users and developers: an IETF open standard, XML data format, and simple extensions to the core protocol. These benefits combine to position XMPP as the most compelling IM protocol available for businesses, consumers, and organizations of any size.

Open Standard Benefits

The fact that XMPP is an open standard has led to its adoption by numerous software projects that cover a broad range of environments and users. This has helped improve the overall design of the protocol, as well as ensured a “best of breed” market of client applications and libraries that work with all XMPP servers. The vibrant XMPP software marketplace contains 90+ compatible clients that run on all standard desktop systems and mobile devices, from phones to tablets.

Wide adoption has provided real-world proof that XMPP-based software from different vendors, deployed by both large and small organizations, can work together seamlessly. For example, a user logged into their personal home server and an employee logged into a corporate IM server can chat, see each other’s presence on their contact lists, and participate in chat rooms hosted on an Openfire XMPP server running at a university.

XML Data

XML is one of the most popular, robust data exchange formats in use today and has become a standard part of most software systems. As a well-matured protocol, XMPP uses the XML data format to transport data over standard TCP/IP sockets and websockets, making the protocol and its data easy to use and understand. Any developer familiar with XML can immediately work with XMPP as no special data format or other proprietary knowledge is needed. Existing tools for creating, reading, editing, and validating XML data can all be used with XMPP without significant modification. The XML foundation of XMPP greatly simplifies integration with existing environments and eases the movement of data to and from the XMPP network.

Extending XMPP

The extensible nature of XML provides much of the extension support built into XMPP. Through the use of XML namespaces, the XMPP protocol can be easily used to transport custom data in addition to standard IM messages and presence information. Software developers and companies interested in the real-time exchange of data are using XMPP as an alternative to custom data transport systems.
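
For instance, the XML tooling already in most standard libraries is enough to build a stanza that carries custom namespaced data alongside the standard IM elements. The `urn:example:sensor` namespace below is invented for illustration; real extensions use namespaces registered in XEPs:

```python
import xml.etree.ElementTree as ET

# A standard <message/> stanza with a custom payload, kept separate
# from the IM elements by its own XML namespace.
message = ET.Element("message", {"to": "sales@acme.com", "type": "chat"})
ET.SubElement(message, "body").text = "Temperature report"

# Custom, namespaced data riding in the same stanza:
reading = ET.SubElement(message, "{urn:example:sensor}reading")
reading.set("celsius", "21.5")

print(ET.tostring(message, encoding="unicode"))
```

A receiving client that does not understand the namespace simply ignores the extra element, which is what makes this extension mechanism safe.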

The XMPP community publishes standard extensions called XMPP Extension Protocols (XEPs) through the XMPP Standards Foundation (XSF). The XSF’s volunteer-driven process provides a way for companies creating innovative extensions and enhancements to the XMPP protocol to work together to create standard improvements that all XMPP users benefit from. There are well over 400 XEPs today covering a wide range of functionality, including security enhancements, user experience improvements, and VoIP and video conferencing. XEPs allow the XMPP protocol to rapidly evolve and improve in an open, standards-based way.

XMPP Networks Explained

An XMPP network is composed of all the XMPP clients and servers that can reach each other on a single computer network. The biggest XMPP network is available on the Internet and connects public XMPP servers. However, people are free to create private XMPP networks within a single company’s internal LAN, on secure corporate virtual private networks, or even within a private network running in a person’s home. Within each XMPP network, each user is assigned a unique XMPP address.

Addresses - Just Like Email

XMPP addresses look exactly the same as email addresses, containing a user name and a domain name. For example, sales@acme.com is a valid XMPP address for a user account named “sales” in the acme.com domain. It is common for an organization to issue the same XMPP address and email address to a user. Within the XMPP server, user accounts are frequently authenticated against the same common user account system used by the email system.
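
Splitting such an address into its parts takes only a few lines. Beyond the user@domain form described above, full JIDs can also carry a "/resource" suffix (defined in the XMPP RFCs, not mentioned in the paragraph above) that identifies one particular connection of the user:

```python
def parse_jid(jid: str):
    """Split an XMPP address into (localpart, domain, resource)."""
    address, _, resource = jid.partition("/")
    if "@" in address:
        local, _, domain = address.partition("@")
    else:
        local, domain = None, address  # bare domain JID, e.g. a server
    return local, domain, resource or None

print(parse_jid("sales@acme.com"))         # ('sales', 'acme.com', None)
print(parse_jid("sales@acme.com/laptop"))  # ('sales', 'acme.com', 'laptop')
```

A production parser would also apply the normalization rules the RFCs require; this sketch only shows the shape of the address.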

XMPP addresses are generated and issued in the same way that email addresses are. Each XMPP domain is managed by the domain owner, and the XMPP server for that domain is used to create, edit, and delete user accounts. For example, the acme.com server is used to manage user accounts that end with @acme.com. If a company runs the acme.com server, the company sets its own policies and uses its own software to manage user accounts. If the domain is a hosted account on an Internet Service Provider (ISP), the ISP usually provides a web control panel to easily manage XMPP user accounts in the same way that email accounts are managed. The flexibility and control that the XMPP network provides are a major benefit of XMPP IM systems over proprietary public IM systems like WhatsApp, Telegram and Signal, where all user accounts are hosted by a third party.

Server Federation

XMPP is designed using a federated, client-server architecture. Server federation is a common means of spreading resource usage and control between Internet services. In a federated architecture, each server is responsible for controlling all activities within its own domain and works cooperatively with servers in other domains as equal peers.

In XMPP, each client connects to the server that controls its XMPP domain. This server is responsible for authentication, message delivery and maintaining presence information for all users within the domain. If a user needs to send an instant message to a user outside of their own domain, their server contacts the external server that controls the “foreign” XMPP domain and forwards the message to that XMPP server. The foreign XMPP server takes care of delivering the message to the intended recipient within its domain. This same server-to-server model applies to all cross-domain data exchanges, including presence information.
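
The routing decision described above reduces to one comparison: deliver inside the server's own domain, otherwise forward over server-to-server federation to whichever server controls the recipient's domain. A sketch with illustrative names, not any real server's internals:

```python
def route(server_domain: str, recipient_jid: str) -> str:
    """Decide whether a stanza is delivered locally or federated out."""
    # Extract the domain part of the recipient's JID, dropping any
    # "/resource" suffix.
    domain = recipient_jid.partition("@")[2].partition("/")[0]
    if domain == server_domain:
        return "deliver locally"
    return f"forward via s2s to {domain}"

print(route("acme.com", "sales@acme.com"))    # deliver locally
print(route("acme.com", "info@example.org"))  # forward via s2s to example.org
```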

XMPP server federation is modeled after the design of Internet email, which has shown that the design scales to include the entire Internet and provides the necessary flexibility and control to meet the needs of individual domains. Each XMPP domain can define the level of security, quality of service, and manageability that make sense for their organization.

Conclusion

XMPP is open, flexible and extensible, making it the protocol of choice for real-time communications over the Internet. It enables the reliable transport of any structured XML data between individuals or applications. Numerous mission-critical business applications use XMPP, including chat and IM, network management and financial trading. With inherent security features and support for cross-domain server federation, XMPP is more than able to meet the needs of the most demanding environments.

2 posts - 2 participants

Read full topic

by guus at October 02, 2024 09:56