Planet Jabber

December 19, 2024

ProcessOne

ejabberd 24.12

ejabberd 24.12

Here comes ejabberd 24.12, including a few improvements and bug fixes. This release comes a month and a half after 24.10, with around 60 commits to the core repository alongside a few updates in dependencies.

Release Highlights:

Among them, the evacuate_kindly command is a new tool that gave this release its funny codename. It lets you stop and restart ejabberd without letting users reconnect, so you can perform your maintenance tasks peacefully. So this is not an emergency exit from ejabberd, but rather testimony that this release paves the way for a lot of cool new stuff in 2025.

In the meantime, we wish you a Merry Christmas and a Happy New Year!

Other contents:

If you are upgrading from a previous version, there are no required changes in the SQL schemas, configuration or hooks. There are, however, some changes in the Commands API, now at version 3.

Below is a detailed breakdown of the improvements and enhancements:

XEP-0484: Fast Authentication Streamlining Tokens

We added support for XEP-0484: Fast Authentication Streamlining Tokens. This allows clients to request time-limited tokens from servers, which can later be used for faster authentication requiring fewer round trips. To enable this feature, add the mod_auth_fast module to the modules section.
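For instance, a minimal sketch of enabling the module in ejabberd.yml (with default options; check the module's documentation for available settings):

```yaml
modules:
  mod_auth_fast: {}
```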

Deprecation schedule for Erlang/OTP older than 25.0

It is expected that around April 2025, GitHub Actions will remove Ubuntu 20, and it will no longer be possible to automatically run dynamic tests for ejabberd using Erlang/OTP older than 25.0.

For that reason, the planned schedule is:

  • ejabberd 24.12

    • Usage of Erlang/OTP older than 25.0 is still supported, but discouraged
    • Anybody still using Erlang/OTP 20.0 through 24.3 is encouraged to upgrade to a newer version. Erlang/OTP 25.0 and higher are supported; for instance, Erlang/OTP 26.3 is used for the binary installers and container images.
  • ejabberd 25.01 (or later)

    • Support for Erlang/OTP older than 25.0 is deprecated
    • Erlang requirement softly increased in configure.ac
    • Announce: no guarantee that ejabberd can compile, start or pass the Common Tests suite using Erlang/OTP older than 25.0
    • Provide instructions for anybody to manually re-enable it and run the tests
  • ejabberd 25.01+1 (or later)

    • Support for Erlang/OTP older than 25.0 is removed completely in the source code

Commands API v3

This ejabberd 24.12 release introduces ejabberd Commands API v3 because some commands have changed arguments and result formatting. You can continue using API v2; or you can update your API client to use API v3. Check the API Versions History.

Some commands that accepted accounts or rooms as arguments, or returned JIDs, have changed their arguments and results names and format to be consistent with the other commands:

  • Arguments that refer to a user account are now named user and host
  • Arguments that refer to a MUC room are now named room and service
  • Each argument is now only the local or server part, not the full JID
  • On the other hand, results that refer to user account or MUC room are now the JID
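As a rough illustration of the naming change (the command here is hypothetical; consult the API Versions History for the real list of affected commands), a room argument that was a full JID in v2 is split into its local and service parts in v3, while results now carry full JIDs:

```
# API v2 request body (hypothetical command):
{"name": "room1@conference.example.org"}

# API v3 request body: room and service passed separately:
{"room": "room1", "service": "conference.example.org"}

# API v3 result: rooms are returned as full JIDs:
["room1@conference.example.org"]
```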

In practice, the commands that change in API v3 are:

If you want to update ejabberd to 24.12, but prefer to continue using an old API version with mod_http_api, you can set this new option:

modules:
  mod_http_api:
    default_version: 2

Improvements in commands

There are a few improvements in some commands:

  • create_rooms_file: Improved; it now supports vhosts with different configurations
  • evacuate_kindly: New command to kick users and prevent login (#4309)
  • join_cluster: Improved explanation: this returns immediately (since 5a34020, 24.06)
  • mod_muc_admin: Renamed argument name to room for consistency, with backwards compatibility (no need to update API clients)

Use non-standard STUN port

STUN via UDP can easily be abused for reflection/amplification DDoS attacks, so ejabberd.yml.example now suggests a non-standard port to make it harder for attackers to discover the service.

Modern XMPP clients discover the port via XEP-0215, so there's no advantage in sticking to the standard port.

Disable the systemd watchdog by default

Some users reported ejabberd being restarted by systemd due to missing watchdog pings, despite the actual service operating just fine. So far, we weren't able to track down the issue, so we'll no longer enable the watchdog in our example service unit.

Define macro as environment variable

ejabberd has allowed you to define macros in the configuration file since version 13.10. This lets you define a value once at the beginning of the configuration file, and then use that macro to set option values several times throughout the file.

Now it is possible to define the macro value as an environment variable. The environment variable name should be EJABBERD_MACRO_ + macro name.

For example, if you configured in ejabberd.yml:

define_macro:
  LOGLEVEL: 4

loglevel: LOGLEVEL

Now you can define (and overwrite) that macro definition when starting ejabberd. For example, if starting ejabberd in interactive mode:

EJABBERD_MACRO_LOGLEVEL=5 make relive

This is especially useful when using containers with slightly different values (a different host, different port numbers...): instead of having a different configuration file for each container, you can now use a macro in your custom configuration file and define different macro values as environment variables when starting each container. See some example usages in CONTAINER's composer examples.
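As a hedged sketch of that container scenario (service names and image tag are illustrative), a Compose file where two containers share one configuration file but get different log levels via the macro:

```yaml
services:
  xmpp1:
    image: ejabberd/ecs:24.12
    environment:
      # Overrides the LOGLEVEL macro defined in ejabberd.yml
      - EJABBERD_MACRO_LOGLEVEL=4
  xmpp2:
    image: ejabberd/ecs:24.12
    environment:
      - EJABBERD_MACRO_LOGLEVEL=5
```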

Elixir modules for authentication

ejabberd modules can be written in the Elixir programming language since ejabberd 15.02. And now, ejabberd authentication methods can also be written in Elixir!

This means you can write a custom authentication method in Erlang or in Elixir, or write an external authentication script in any language you want.

There's an example authentication method in the lib/ directory. Place your custom authentication method in that directory, compile ejabberd, and configure it in ejabberd.yml:

auth_method: 'Ejabberd.Auth.Example'

For consistency with that file naming scheme, the old mod_presence_demo.ex has been renamed to mod_example.ex. Other minor changes were done on the Elixir example code.

Redis now supports Unix Domain Socket

Support for Unix Domain Sockets was added to the listener's port option in ejabberd 20.07, and more recently ejabberd 24.06 added support in sql_server when using MySQL or PostgreSQL.
This feature improves performance and security when those programs run on the same machine as ejabberd.

Now the redis_server option also supports Unix Domain Socket.

The syntax is similar to the other options: simply set unix: followed by the full path to the socket file. For example:

redis_server: "unix:/var/run/redis/redis.socket"

Additionally, we took the opportunity to move from the wooga/eredis Erlang library, which hasn't been updated in the last six years, to the actively maintained Nordix/eredis fork.

New evacuate_kindly command

ejabberd nowadays has around 180 commands to perform many administrative tasks. Let's review some of their use cases:

  • Did you modify the configuration file? Reload the configuration file and apply its changes

  • Did you apply some patch to ejabberd source code? Compile and install it, and then update the module binary in memory

  • Did you update ejabberd-contrib specs, or improved your custom module in .ejabberd-module? Call module_upgrade to compile and upgrade it into memory

  • Did you upgrade ejabberd, and that includes many changes? Compile and install it, then restart ejabberd completely

  • Do you need to stop a production ejabberd which has users connected? stop_kindly the server, informing users and rooms

  • Do you want to stop ejabberd gracefully? Then simply stop it

  • Do you need to stop ejabberd immediately, without worrying about the users? You can halt ejabberd abruptly

Now there is a new command, evacuate_kindly, useful when you need ejabberd running to perform some administrative task, but you don't want users connected while you perform those tasks.

It stops the port listeners to prevent new client or server connections, informs users and rooms, waits a few seconds or minutes, and then restarts ejabberd. However, when ejabberd starts again, the port listeners remain stopped: this allows you to perform administrative tasks, for example in the database, without having to worry about users.

For example, assume ejabberd is running with users connected. First, let's evacuate all the users:

ejabberdctl evacuate_kindly 60 "The server will stop in one minute."

Wait one minute; ejabberd then restarts with connections disabled.
Now you can perform any administrative tasks you need.
Once everything is ready to accept user connections again, simply restart ejabberd:

ejabberdctl restart

Acknowledgments

We would like to thank the following people for their contributions to the source code, documentation, and translation in this release:

And also to all the people contributing in the ejabberd chatroom, issue tracker...

Improvements in ejabberd Business Edition

Customers of the ejabberd Business Edition, in addition to all those improvements and bugfixes, also get support for Prometheus.

Prometheus support

Prometheus can now be used as a backend for mod_mon in addition to statsd, influxdb, influxdb2, datadog and dogstatsd.

You can expose all mod_mon metrics to Prometheus by adding an HTTP listener pointing to mod_prometheus, for example:

  -
    port: 5280
    module: ejabberd_http
    request_handlers:
      "/metrics": mod_prometheus

You can then add a scrape config to Prometheus for ejabberd:

scrape_configs:
  - job_name: "ejabberd"
    static_configs:
      - targets:
          - "ejabberd.domain.com:5280"

You can also limit the metrics to a specific virtual host by adding its name to the path:

scrape_configs:
  - job_name: "ejabberd"
    metrics_path: /metrics/myvhost.domain.com
    static_configs:
      - targets:
          - "ejabberd.domain.com:5280"

Fix

  • PubSub: fix issue on get_item_name with p1db storage backend.

ChangeLog

This is a more detailed list of changes in this ejabberd release:

Miscellanea

  • Elixir: support loading Elixir modules for auth (#4315)
  • Environment variables EJABBERD_MACRO to define macros
  • Fix problem starting ejabberd when first host uses SQL, other one mnesia
  • HTTP Websocket: Enable allow_unencrypted_sasl2 on websockets (#4323)
  • Relax checks for channels bindings for connections using external encryption
  • Redis: Add support for unix domain socket (#4318)
  • Redis: Use eredis 1.7.1 from Nordix when using mix/rebar3 and Erlang 21+
  • mod_auth_fast: New module with support for XEP-0484: Fast Authentication Streamlining Tokens
  • mod_http_api: Fix crash when module not enabled (for example, in CT tests)
  • mod_http_api: New option default_version
  • mod_muc: Make RSM handling in disco items correctly count skipped rooms
  • mod_offline: Only delete offline msgs when user has MAM enabled (#4287)
  • mod_privilege: Properly handle roster IQs
  • mod_pubsub: Send notifications on PEP item retract
  • mod_s2s_bidi: Catch extra case in check for s2s bidi element
  • mod_scram_upgrade: Don't abort the upgrade
  • mod_shared_roster: The name of a new group is lowercased
  • mod_shared_roster: Get back support for groupid@vhost in displayed

Commands API

  • Change arguments and result to consistent names (API v3)
  • create_rooms_file: Improve to support vhosts with different config
  • evacuate_kindly: New command to kick users and prevent login (#4309)
  • join_cluster: Explain that this returns immediately (since 5a34020, 24.06)
  • mod_muc_admin: Rename argument name to room for consistency

Documentation

  • Fix some documentation syntax, add links to toplevel, modules and API
  • CONTAINER.md: Add kubernetes yaml examples to use with podman
  • SECURITY.md: Add security policy and reporting guidelines
  • ejabberd.service: Disable the systemd watchdog by default
  • ejabberd.yml.example: Use non-standard STUN port

WebAdmin

  • Shared group names are case sensitive, use original case instead of lowercase
  • Use lowercase username and server authentication credentials
  • Fix calculation of node's uptime days
  • Fix link to displayed group when it is from another vhost

Full Changelog

https://github.com/processone/ejabberd/compare/24.10...24.12

ejabberd 24.12 download & feedback

As usual, the release is tagged in the Git source code repository on GitHub.

The source package and installers are available in ejabberd Downloads page. To check the *.asc signature files, see How to verify ProcessOne downloads integrity.

For convenience, there are alternative download locations like the ejabberd DEB/RPM Packages Repository and the GitHub Release / Tags.

The ecs container image is available in docker.io/ejabberd/ecs and ghcr.io/processone/ecs. The alternative ejabberd container image is available in ghcr.io/processone/ejabberd.

If you believe you've found a bug, please search or file a bug report on GitHub Issues.

by Jérôme Sautret at December 19, 2024 16:27

December 18, 2024

JMP

Newsletter: JMP at SeaGL, Cheogram now on Amazon

Hi everyone!

Welcome to the latest edition of your pseudo-monthly JMP update!

In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client.  Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

JMP at SeaGL

The Seattle GNU/Linux Conference (SeaGL) is happening next week and JMP will be there!  We’re going to have a booth with some of our employees, and will have JMP eSIM Adapters and USB card readers for purchase (if you prefer to save on shipping, or like to pay cash or otherwise), along with stickers and good conversations. :)  The exhibition area is open all day on Friday and Saturday, November 8 and 9, so be sure to stop by and say hi if you happen to be in the area.  We look forward to seeing you!

Cheogram Android in Amazon Appstore

We have just added Cheogram Android to the Amazon Appstore!  And we also added Cheogram Android to Aptoide earlier this month.  While F-Droid remains our preferred official source, we understand many people prefer to use stores that they’re used to, or that come with their device.  We also realize that many people have been waiting for Cheogram Android to return to the Play Store, and we wanted to provide this other option to pay for Cheogram Android while Google works out the approval process issues on their end to get us back in there.  We know a lot of you use and recommend app store purchases to support us, so let your friends know about this new Amazon Appstore option for Cheogram Android if they’re interested!

New features in Cheogram Android

As usual, we’ve added a bunch of new features to Cheogram Android over the past month or so.  Be sure to update to the latest version (2.17.2-1) to check them out!  (Note that Amazon doesn’t have this version quite yet, but it should be there shortly.)  Here are the notable changes since our last newsletter: privacy-respecting link previews (generated by sender), more familiar reactions, filtering of conversation list by account, nicer autocomplete for mentions and emoji, and fixes for Android 15, among many others.

To learn what’s happening with JMP between newsletters, here are some ways you can find out:

by Denver Gingerich at December 18, 2024 15:37

Newsletter: Year in Review, Google Play Update

Hi everyone!

Welcome to the latest edition of your pseudo-monthly JMP update!

In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client.  Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

As we approach the close of 2024, we want to take a moment to reflect on a year full of growth, innovation, and connection. Thanks to your support and engagement, JMP has continued to thrive as a service that empowers you to stay connected with the world using open standards and flexible technology. Here’s a look back at some of the highlights that made this year so special:

Cheogram Android

Cheogram Android, which we sponsor, experienced significant developments this year. Besides the preferred distribution channel of F-Droid, the app is also available on other platforms like Aptoide and the Amazon Appstore. It was removed from the Google Play Store in September for unknown reasons, and after a long negotiation has been restored to Google Play without modification.

Cheogram Android saw several exciting feature updates this year, including:

  • Major visual refresh
  • Animated custom emoji
  • Better Reactions UI (including custom emoji reactions)
  • Widgets powered by WebXDC for interactive chats and app extensions
  • Initial support for link previews
  • The addition of a navigation drawer to show chats from only one account or tag
  • Allowing edits to any message you have sent

This month also saw the release of 2.17.2-3 including:

  • Fix direct shares on Android 12+
  • Option to hide media from gallery
  • Do not re-notify dismissed notifications
  • Experimental extensions support based on WebXDC
  • Experimental XEP-0227 export support

Of course nothing in Cheogram Android would be possible without the hard work of the upstream project, Conversations, so thanks go out to the devs there as well.

eSIM Adapter Launch

This year, we introduced the JMP eSIM Adapter—a device that bridges the gap for devices without native eSIM support, and adds flexibility for devices with eSIM support. Whether you’re travelling, upgrading your device, or simply exploring new options, the eSIM Adapter makes it seamless to transfer eSIMs across your devices.

Engaging with the Community

This year, we hosted booths at SeaGL, FOSSY, and HOPE, connecting with all of you in person. These booths provided opportunities to learn about our services, pay for subscriptions, or purchase eSIM Adapters face-to-face.

Addressing Challenges

In 2024, we also tackled some pressing industry issues, such as SMS censorship. To help users avoid censorship and gain access to bigger MMS group chats, we’ve added new routes that you can request from our support team.

As part of this, we also rolled out the ability for JMP customers to receive calls directly over SIP.

Holiday Support Schedule

We want to inform you that JMP support will be reduced from our usual response level from December 23 until January 6. During this period, response times will be significantly longer than usual as our support staff take time with their families. We appreciate your understanding and patience.

Looking Ahead

As we move into 2025, we’re excited to keep building on this momentum. Expect even more features, improved services, and expanded opportunities to connect with the JMP community. Your feedback has been, and will always be, instrumental in shaping the future of JMP.

To learn what’s happening with JMP between newsletters, here are some ways you can find out:

by Stephen Paul Weber at December 18, 2024 15:36

December 13, 2024

Kaidan

Kaidan 0.10.1: Media Sharing and New Message Marker Fixes

This release fixes some bugs. Have a look at the changelog for more details.

Changelog

Bugfixes:

  • Fix displaying files of each message in appropriate message bubble (melvo)
  • Fix sending fallback messages for clients not supporting XEP-0447: Stateless file sharing (melvo)
  • Fix margins within message bubbles (melvo)
  • Fix hiding hidden message part (melvo)
  • Fix displaying marker for new messages (melvo)

Download

Or install Kaidan for your distribution:

Packaging status

December 13, 2024 23:00

December 10, 2024

Erlang Solutions

Meet the team: Erik Schön

In our final “Meet the Team” of 2024, we’d like to introduce you to Erik Schön, Managing Director at Erlang Solutions.

Erik shares his journey with Erlang, Elixir, and the BEAM ecosystem, from his work at Ericsson to joining Erlang Solutions in 2019. He also reflects on a key professional highlight in 2024 and looks ahead to his goals for 2025. Erik also reveals his festive traditions, including a Swedish-Japanese twist.

About Erik

So tell us about yourself and your role at Erlang Solutions.

Hello, I’m Erik! I’ve been a big fan of all things Erlang/Elixir/BEAM since the 90s, having seen many successful applications of it when working at Ericsson as an R&D manager for many years.

Since 2019, I’ve been part of the Erlang Solutions Nordic Fjällrävens (“Arctic Foxes”) team based in Stockholm, Sweden. I love helping our customers succeed by delivering faster, safer, and more efficient solutions.

What has been a professional highlight of yours in 2024?

The highlight of 2024 for me was our successful collaboration with BoardClic, a startup that helps its customers with digital board and C-suite level performance evaluations.

We started our collaboration with a comprehensive code-/architecture review of their Elixir codebase, using our 25 years of experience in delivering software for societal infrastructure, including all the do’s and don’ts for future-proof, secure, resilient, and scalable solutions.

Based on this, we boosted their development of new functionality for a strategically important customer—from idea to live, commercial operation. Two of our curious, competent collaborators, with 10+ years of practical, hands-on Elixir/Erlang/BEAM expertise, worked closely with BoardClic on-site to deliver on time and with quality.

What professional and personal achievements are you looking forward to achieving in 2025? 

Professionally, I look forward to continued success with our customers. This includes strengthening our long-standing partnerships with TV4, Telia, Ericsson, and Cisco. I’m also excited about the start of new partnerships, both inside and outside the BEAM community where we will continue to deliver more team-based, full-stack, end-to-end solutions.

Personally, I look forward to continuing to talk about my trilogy of books – The Art of Change, The Art of Leadership and The Art of Strategy – in podcasts, meetups and conferences.

Do you have any festive traditions that you’re looking forward to this holiday season?

In Sweden, julbord (a buffet-style table of small dishes including different kinds of marinated fish like herring and salmon, meatballs, ham, porridge, etc.) is a very important tradition to look forward to. Since my wife is from Japan, we always try to spice things up a bit by including suitable dishes from the Japanese kitchen, like different kinds of sushi.

Final thoughts

As we wrap up our 2024 meet-the-team series, a big thank you to Erik and all the incredible team members we’ve highlighted this year. Their passion, expertise, and dedication continue to drive our success.

Stay tuned for more insights and profiles in the new year as we introduce even more of the talented people who make Erlang Solutions what it is! If you'd like to speak more with our team, please get in touch.

The post Meet the team: Erik Schön appeared first on Erlang Solutions.

by Erlang Solutions Team at December 10, 2024 13:37

December 09, 2024

Kaidan

Kaidan 0.10.0: Too Much to Summarize!

Screenshot of Kaidan

We finally made it: Kaidan’s next release with so many features that we cannot summarize them in one sentence!

Most of the work has been funded by NLnet via NGI Assure and NGI Zero Entrust with public money provided by the European Commission. If you want Kaidan’s progress to continue and keep more free software projects alive, please share and sign the open letter for further funding!

Now to the bunch of Kaidan’s new and great features:

Group chats with invitations, user listing, participant mentioning and private/public group chat filtering are supported now. In order to use it, you need an XMPP provider that supports MIX-Core, MIX-PAM and MIX-Admin. Unfortunately, there are not many providers supporting it yet since it is a comparatively recent group chat variant.

You do not need to quote messages just to reply to them any longer. The messages are referenced internally without bloating the conversation. After clicking on a referenced message, Kaidan even jumps to it. In addition, Kaidan allows you to remove unwanted messages locally.

We added an overview of all shared media to quickly find the image you received some time ago. You can define when to download media automatically. Furthermore, connecting to the server is now really fast - no need to wait multiple seconds just to see your latest offline messages anymore.

If you enter a chat address (e.g., to add a contact), its server part is now autocompleted if available. We added filter options for contacts and group chats. After adding labels to them, you can even search by those labels. And if you do not want to get any messages from someone, you can block them.

In case you need to move to a new account (e.g., if you are dissatisfied with your current XMPP provider), Kaidan helps you with that. For example, it transfers your contacts and informs them about the move. The redesigned onboarding user interface including many fixes assists with choosing a new provider and creating an account on it.

We updated Kaidan to the API v2 of XMPP Providers to stay up-to-date with the project’s data. If you are an operator of a public XMPP provider and would like Kaidan’s users to easily create accounts on it, simply ask to add it to the provider list.

The complete list of changes can be found in the changelog section. There is also a technical overview of all currently supported features.

Please note that we currently focus on new features instead of supporting more systems. Once Kaidan has a reasonable feature set, we will work on that topic again. Even if Kaidan is making good progress, keep in mind that it is not yet a stable app.

Changelog

Features:

  • Add server address completion (fazevedo)
  • Allow to edit account’s profile (jbb)
  • Store and display delivery states of message reactions (melvo)
  • Send pending message reactions after going online (melvo)
  • Enable user to resend a message reaction if it previously failed (melvo)
  • Open contact addition as page (mobile) or dialog (desktop) (melvo)
  • Add option to open chat if contact exists on adding contact (melvo)
  • Use consistent page with search bar for searching its content (melvo)
  • Add local message removal (taibsu)
  • Allow reacting to own messages (melvo)
  • Add login option to chat (melvo)
  • Display day of the week or “yesterday” for last messages (taibsu, melvo)
  • Add media overview (fazevedo, melvo)
  • Add contact list filtering by account and labels (i.e., roster groups) (incl. addition/removal) (melvo, tech-bash)
  • Add message date sections to chat (melvo)
  • Add support for automatic media downloads (fazevedo)
  • Add filtering contacts by availability (melvo)
  • Add item to contact list on first received direct message (melvo)
  • Add support for blocking chat addresses (lnj)
  • Improve notes chat (chat with oneself) usage (melvo)
  • Place avatar above chat address and name in account/contact details on narrow window (melvo)
  • Reload camera device for QR code scanning as soon as it is plugged in / enabled (melvo)
  • Provide slider for QR code scanning to adjust camera zoom (melvo)
  • Add contact to contact list on receiving presence subscription request (melvo)
  • Add encryption key authentication via entering key IDs (melvo)
  • Improve connecting to server and authentication (XEP-0388: Extensible SASL Profile (SASL 2), XEP-0386: Bind 2, XEP-0484: Fast Authentication Streamlining Tokens, XEP-0368: SRV records for XMPP over TLS) (lnj)
  • Support media sharing with more clients even for sharing multiple files at once (XEP-0447: Stateless file sharing v0.3) (lnj)
  • Display and check media upload size limit (fazevedo)
  • Redesign message input field to use rounded corners and resized/symbolic buttons (melvo)
  • Add support for moving account data to another account, informing contacts and restoring settings for moved contacts (XEP-0283: Moved) (fazevedo)
  • Add group chat support with invitations, user listing, participant mentioning and private/public group chat filtering (XEP-0369: Mediated Information eXchange (MIX), XEP-0405: Mediated Information eXchange (MIX): Participant Server Requirements, XEP-0406: Mediated Information eXchange (MIX): MIX Administration, XEP-0407: Mediated Information eXchange (MIX): Miscellaneous Capabilities) (melvo)
  • Add button to cancel message correction (melvo)
  • Display marker for new messages (melvo)
  • Add enhanced account-wide and per contact notification settings depending on group chat mentions and presence (melvo)
  • Focus input fields appropriately (melvo)
  • Add support for replying to messages (XEP-0461: Message Replies) (melvo)
  • Indicate that Kaidan is busy during account deletion and group chat actions (melvo)
  • Hide account deletion button if In-Band Registration is not supported (melvo)
  • Embed login area in page for QR code scanning and page for web registration instead of opening start page (melvo)
  • Redesign onboarding user interface including new page for choosing provider to create account on (melvo)
  • Handle various corner cases that can occur during account creation (melvo)
  • Update to XMPP Providers v2 (melvo)
  • Hide voice message button if uploading is not supported (melvo)
  • Replace custom images for message delivery states with regular theme icons (melvo)
  • Free up message content space by hiding unneeded avatars and increasing maximum message bubble width (melvo)
  • Highlight draft message text to easily see what is not sent yet (melvo)
  • Store sent media in suitable directories with appropriate file extensions (melvo)
  • Allow sending media with less steps from recording to sending (melvo)
  • Add media to be sent in scrollable area above message input field (melvo)
  • Display original images (if available) as previews instead of their thumbnails (melvo)
  • Display high resolution thumbnails for locally stored videos as previews instead of their thumbnails (melvo)
  • Send smaller thumbnails (melvo)
  • Show camera status and reload camera once plugged in for taking pictures or recording videos (melvo)
  • Add zoom slider for taking pictures or recording videos (melvo)
  • Show overlay with description when files are dragged to be dropped on chats for being shared (melvo)
  • Show location previews on a map (melvo)
  • Open locations in user-defined way (system default, in-app, web) (melvo)
  • Delete media that is only captured for sending but not sent (melvo)
  • Add voice message recorder to message input field (melvo)
  • Add inline audio player (melvo)
  • Add context menu entry for opening directory of media files (melvo)
  • Show collapsible buttons to send media/locations inside of message input field (melvo)
  • Move button for adding hidden message part to new collapsible button area (melvo)

Bugfixes:

  • Fix index out of range error in message search (taibsu)
  • Fix updating last message information in contact list (melvo)
  • Fix multiple corrections of the same message (melvo, taibsu)
  • Request delivery receipts for pending messages (melvo)
  • Fix sorting roster items (melvo)
  • Fix displaying spoiler messages (melvo)
  • Fix displaying errors and encryption warnings for messages (melvo)
  • Fix fetching messages from server’s archive (melvo)
  • Fix various encryption problems (melvo)
  • Send delivery receipts for caught-up messages (melvo)
  • Do not hide last message date if contact name is too long (melvo)
  • Fix displaying emojis (melvo)
  • Fix several OMEMO bugs (melvo)
  • Remove all locally stored data related to removed accounts (melvo)
  • Fix displaying media preview file names/sizes (melvo)
  • Fix disconnecting from server when the application window is closed, including a timeout on connection problems (melvo)
  • Fix media/location sharing (melvo)
  • Fix handling emoji message reactions (melvo)
  • Fix moving pinned chats (fazevedo)
  • Fix drag and drop for files and pasting them (melvo)
  • Fix sending/displaying media in selected order (lnj, melvo)

Notes:

  • Kaidan is REUSE-compliant now
  • Kaidan requires Qt 5.15 and QXmpp 1.9 now

Download

Or install Kaidan for your distribution:

Packaging status

December 09, 2024 00:00

December 05, 2024

The XMPP Standards Foundation

The XMPP Newsletter November 2024

XMPP Newsletter Banner

XMPP Newsletter Banner

Welcome to the XMPP Newsletter, great to have you here again! This issue covers the month of November 2024.

Like this newsletter, many projects and their efforts in the XMPP community are a result of people’s voluntary work. If you are happy with the services and software you may be using, please consider saying thanks or help these projects! Interested in supporting the Newsletter team? Read more at the bottom.

XSF Announcements

XMPP Summit 27 & FOSDEM 2025

The XSF is planning the XMPP Summit 27, which is to take place on January 30th & 31st 2025 in Brussels (Belgium, Europe). Following the Summit, the XSF is also planning to be present at FOSDEM 2025, which takes place on February 1st & 2nd 2025. Find all the details in our Wiki. Please sign up now if you are planning to attend, since this helps with organizing. The event is of course open for everyone interested to participate. Spread the word within your circles!

XSF Fiscal Hosting Projects

The XSF offers fiscal hosting for XMPP projects. Please apply via Open Collective. For more information, see the announcement blog post. Current projects you can support:

XMPP Events

  • Berlin XMPP Meetup (DE / EN): monthly meeting of XMPP enthusiasts in Berlin, every 2nd Wednesday of the month at 6pm local time
  • XMPP Italian happy hour [IT]: monthly Italian XMPP web meeting, every third Monday of the month at 7:00 PM local time (online event, with web meeting mode and live streaming).

XMPP Articles

XMPP Software News

XMPP Clients and Applications

  • Conversations has released versions 2.17.3 and 2.17.4 for Android.
  • Monocles Chat 2.0.2 has been released. This version brings MiniDNS, settings export, fixes and more!
  • Monal has released version 6.4.6 for iOS and macOS.
  • Cheogram has released version 2.17.2-2 for Android. This release brings a chat requests feature to hide possible SPAM, with an option to report all. Additionally, it comes with several improvements and bugfixes. Also worth noting, since last November Cheogram is once again available for download from the Google Play Store!
Cheogram 2.17.2-2 navigation drawer with account and tag filters and SPAM control, featuring the option to report all.

XMPP Servers

  • Openfire 4.9.1 and 4.9.2 have been released. Version 4.9.1 is a bugfix and maintenance release, whereas version 4.9.2 is a bugfix release. You can read the full changelog for more details.
  • MongooseIM version 6.3.0 has been released. The main highlight is the complete instrumentation rework, allowing integration with Prometheus. Additionally, CockroachDB has been added to the list of supported databases for increased scalability. See the release notes for more information.
  • The (non-official) Prosody app for Yunohost has now reached beta maturity, opening it up for everybody to test. This variant aims at providing better XMPP support for Yunohost users. In comparison to the official Metronome and Prosody apps, this app enables A/V calls working out of the box. An optional import of rosters, MUCs, and bookmarks from Metronome is also provided. As a reminder, Yunohost is a server distribution based on Debian which makes it easy to host a lot of services (apps) by yourself. Until the last major release (version 12), Metronome was integrated in the core installation, allowing a lot of people to discover XMPP more easily (though with some limitations).

XMPP Libraries & Tools

Extensions and specifications

The XMPP Standards Foundation develops extensions to XMPP in its XEP series in addition to XMPP RFCs.

Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).

Proposed

The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

  • No XEPs Proposed this month.

New

  • Version 0.1.0 of XEP-0496 (Pubsub Node Relationships)
    • Promoted to Experimental (XEP Editor: dg)
  • Version 0.1.0 of XEP-0497 (Pubsub Extended Subscriptions)
    • Promoted to Experimental (XEP Editor: dg)
  • Version 0.1.0 of XEP-0498 (Pubsub File Sharing)
    • Promoted to Experimental (XEP Editor: dg)
  • Version 0.1.0 of XEP-0499 (Pubsub Extended Discovery)
    • Promoted to Experimental (XEP Editor: dg)
  • Version 0.1.0 of XEP-0500 (MUC Slow Mode)
    • Promoted to Experimental (XEP Editor: dg)

Deferred

If an experimental XEP is not updated for more than twelve months, it will be moved off Experimental to Deferred. If there is another update, it will put the XEP back onto Experimental.

  • No XEPs deferred this month.

Updated

  • Version 1.0.1 of XEP-0490 (Message Displayed Synchronization)
    • Fix some examples, and their indentation.
    • Add the XML Schema. (egp)

Last Call

Last calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call can help improve the XEP before returning it to the Council for advancement to Stable.

  • No Last Call this month.

Stable

  • Version 1.0.0 of XEP-0490 (Message Displayed Synchronization)
    • Accept as Stable as per Council Vote from 2024-11-05. (XEP Editor: dg)

Deprecated

  • No XEP deprecated this month.

Rejected

  • No XEP rejected this month.

Spread the news

Please share the news on other networks:

Subscribe to the monthly XMPP newsletter
Subscribe

Also check out our RSS Feed!

Looking for job offers or want to hire a professional consultant for your XMPP project? Visit our XMPP job board.

Newsletter Contributors & Translations

This is a community effort, and we would like to thank translators for their contributions. Volunteers and more languages are welcome! Translations of the XMPP Newsletter will be released here (with some delay):

  • English (original): xmpp.org
    • General contributors: Adrien Bourmault (neox), Alexander “PapaTutuWawa”, Arne, cal0pteryx, emus, Federico, Gonzalo Raúl Nemmi, Jonas Stein, Kris “poVoq”, Licaon_Kter, Ludovic Bocquet, Mario Sabatino, melvo, MSavoritias (fae,ve), nicola, Schimon Zachary, Simone Canaletti, singpolyma, XSF iTeam
  • French: jabberfr.org and linuxfr.org
    • Translators: Adrien Bourmault (neox), alkino, anubis, Arkem, Benoît Sibaud, mathieui, nyco, Pierre Jarillon, Ppjet6, Ysabeau
  • Italian: notes.nicfab.eu
    • Translators: nicola
  • Spanish: xmpp.org
    • Translators: Gonzalo Raúl Nemmi
  • German: xmpp.org
    • Translators: Millesimus

Help us to build the newsletter

This XMPP Newsletter is produced collaboratively by the XMPP community. Each month’s newsletter issue is drafted in this simple pad. At the end of each month, the pad’s content is merged into the XSF Github repository. We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. You have a project and want to spread the news? Please consider sharing your news or events here, and promote it to a large audience.

Tasks we do on a regular basis:

  • gathering news in the XMPP universe
  • short summaries of news and events
  • summary of the monthly communication on extensions (XEPs)
  • review of the newsletter draft
  • preparation of media images
  • translations
  • communication via media accounts

Unsubscribe from the XMPP Newsletter

To unsubscribe from this list, please log in first. If you have not previously logged in, you may need to set up an account with the appropriate email address.

License

This newsletter is published under CC BY-SA license.

December 05, 2024 00:00

December 04, 2024

Erlang Solutions

Advent of Code 2024

Welcome to Advent of Code 2024!

Like every year, I start the challenge with the best attitude and love of being an Elixir programmer. Although I know that at some point I will hit the “what is this? I hate it” phase, unlike other years, this time I am committed to finishing Advent of Code and, more importantly, sharing it with you.

I hope you enjoy this series of December posts, where we will discuss the approach for each exercise. But remember that it is not the only one, and the idea of this initiative is to have a great time and share knowledge, so don’t forget to post your solutions and comments and tag us to continue the conversation.

Let’s go for it!

Day 1: Historian Hysteria

Before starting any exercise, I suggest spending some time defining the structure that best fits the problem’s needs. If the structure is adequate, it will be easy to reuse it for the second part without further complications.

In this case, the exercise itself describes lists as the input, so we can skip that step and instead consider which functions of the Enum or List modules can be helpful.

We have this example input:

3   4
4   3
2   5
1   3
3   9
3   3

The goal is to transform it into two separate lists and apply sorting, comparison, etc.

List 1: [3, 4, 2, 1, 3, 3]
List 2: [4, 3, 5, 3, 9, 3]

Let’s define a function that reads a file with the input. Each line will initially be represented by a string, so use String.split to separate it at each line break. 

def get_input(path) do
  path
  |> File.read!()
  |> String.split("\n", trim: true)
end


["3   4", "4   3", "2   5", "1   3", "3   9", "3   3"]

We will still have each row represented by a string, but we can now modify this using the functions in the Enum module. Notice that the whitespace between characters is constant, and the pattern is that the first element should go into list one and the second element into list two. Use Enum.reduce to map the elements to the corresponding list and get the following output:


%{
 first_list: [3, 3, 1, 2, 4, 3],
 second_list: [3, 9, 3, 5, 3, 4]
}

I’m using a map so that we can identify the lists and everything is clear. The function that creates them is as follows:

@doc """
This function takes a list where the elements are strings with two
components separated by whitespace.

Example: "3   4"

It assigns the first element to list one and the second to list two,
assuming both are numbers.
"""
def define_separated_lists(input) do
  Enum.reduce(input, %{first_list: [], second_list: []}, fn row, map_with_lists ->
    [elem_first_list, elem_second_list] = String.split(row, "   ")

    %{
      first_list: [String.to_integer(elem_first_list) | map_with_lists.first_list],
      second_list: [String.to_integer(elem_second_list) | map_with_lists.second_list]
    }
  end)
end

Once we have this format, we can move on to the first part of the exercise.

Part 1

Use Enum.sort to sort the lists in ascending order and pass them to Enum.zip_with, which will calculate the distance between the elements of both lists. Note that we use abs to avoid negative values and, finally, Enum.reduce to sum all the distances.

first_sorted_list = Enum.sort(first_list)
second_sorted_list = Enum.sort(second_list)

first_sorted_list
|> Enum.zip_with(second_sorted_list, fn x, y -> abs(x - y) end)
|> Enum.reduce(0, fn distance, acc -> distance + acc end)
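As a quick sanity check (a minimal self-contained sketch, not part of the original solution), running this Part 1 pipeline on the sample lists from the text yields the puzzle's expected total distance of 11:

```elixir
# Part 1 end-to-end on the sample input: sort both lists, pair the
# elements, and sum the absolute differences.
first_list = [3, 4, 2, 1, 3, 3]
second_list = [4, 3, 5, 3, 9, 3]

total =
  Enum.sort(first_list)
  |> Enum.zip_with(Enum.sort(second_list), fn x, y -> abs(x - y) end)
  |> Enum.reduce(0, fn distance, acc -> distance + acc end)

IO.inspect(total)
# => 11
```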

Part 2

For the second part, you don’t need to sort the lists; use Enum.frequencies and Enum.reduce to get the multiplication of the elements.

frequencies_second_list = Enum.frequencies(second_list)

Enum.reduce(first_list, 0, fn elem, acc ->
  elem * Map.get(frequencies_second_list, elem, 0) + acc
end)
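Again as a sketch (not part of the original post), the same Part 2 computation on the sample lists gives the puzzle's expected similarity score of 31:

```elixir
# Part 2 end-to-end on the sample input: multiply each element of the
# first list by its frequency in the second list, then sum.
first_list = [3, 4, 2, 1, 3, 3]
second_list = [4, 3, 5, 3, 9, 3]

frequencies_second_list = Enum.frequencies(second_list)

similarity =
  Enum.reduce(first_list, 0, fn elem, acc ->
    elem * Map.get(frequencies_second_list, elem, 0) + acc
  end)

IO.inspect(similarity)
# => 31
```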

That’s it. As you can see, once we have a good structure, the corresponding module, in this case, Enum, makes the operations more straightforward, so it’s worth spending some time defining which input will make our life easier.

You can see the full version of the exercise here.

Day 2: Red-Nosed Reports

The initial function receives a path corresponding to the text file with the input and reads the strings, separating them by newlines. Inside this function, we will also convert each string to a list of integers, using the Enum functions.

def get_input(path) do
  path
  |> File.read!()
  |> String.split("\n", trim: true)
  |> Enum.map(&convert_string_to_int_list(&1))
end

With this example: 

7 6 4 2 1
1 2 7 8 9
9 7 6 2 1
1 3 2 4 5
8 6 4 4 1
1 3 6 7 9

Our output will look like this:

[
   [7, 6, 4, 2, 1],
   [1, 2, 7, 8, 9],
   [9, 7, 6, 2, 1],
   [1, 3, 2, 4, 5],
   [8, 6, 4, 4, 1],
   [1, 3, 6, 7, 9]
]

We already have a format that allows us to compare integers and validate each report individually. So, let’s do that.

Part 1

For a report to be valid, the following conditions must be met:

  • The levels are either all increasing or all decreasing.
  • Any two adjacent levels differ by at least one and at most three.

We will use Enum.sort and Enum.filter to get those lists that are sorted either ascending or descending.

Enum.filter(levels, &(is_ascending?(&1) || is_descending?(&1)))

A list is sorted ascending if it matches Enum.sort(list); it is sorted descending if it matches Enum.sort(list, :desc).

defp is_ascending?(list), do: Enum.sort(list) == list
defp is_descending?(list), do: Enum.sort(list, :desc) == list

Once we have the ordered lists, we will now filter those that meet the condition that the distance between their elements is >= 1 and <= 3.

Enum.filter(levels, &valid_levels_distance?(&1, is_valid))

The valid_levels_distance? function is recursive: it iterates over the elements of the list and returns true if the condition holds, otherwise false. In the end, we will have the lists that meet both conditions and only need to count their elements.

path
|> get_input()
|> get_sorted_level_lists()
|> get_valid_adjacent_levels()
|> Enum.count()
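The valid_levels_distance? helper itself is not listed in the post; a simplified sketch of the recursion it describes (dropping the is_valid flag that the original signature carries) might look like this:

```elixir
defmodule Day2 do
  # A report with zero or one level trivially satisfies the distance rule.
  def valid_levels_distance?([]), do: true
  def valid_levels_distance?([_last]), do: true

  # Check each adjacent pair: the distance must be between 1 and 3.
  def valid_levels_distance?([a, b | rest]) do
    distance = abs(a - b)

    if distance >= 1 and distance <= 3 do
      valid_levels_distance?([b | rest])
    else
      false
    end
  end
end
```

With the sample reports, [7, 6, 4, 2, 1] passes (distances 1, 2, 2, 1) while [1, 2, 7, 8, 9] fails on the 2→7 jump.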

Part 2

For this second part, I used a function that wraps the validations. In the previous exercise each step is separated, but here I will define the function is_a_valid_list?:

defp is_a_valid_list?(list),
   do: (is_ascending?(list) || is_descending?(list)) && valid_levels_distance?(list, false)

If the list is invalid, the following function will remove each level and check if the conditions are met with this operation.

@spec remove_level_and_validate_list(list(), integer) :: list()
def remove_level_and_validate_list(level, index) when index == length(level), do: []

def remove_level_and_validate_list(level, index) do
  new_list = List.delete_at(level, index)

  if is_a_valid_list?(new_list) do
    new_list
  else
    remove_level_and_validate_list(level, index + 1)
  end
end

With this, we will have all the valid lists, whether original or modified, and the last step will be to count their elements.

path
|> get_input()
|> get_valid_lists_with_bad_levels()
|> Enum.count()

I like to use recursive functions for these kinds of exercises because it’s an explicit way to check what happens at each step. But remember that we can also take advantage of Enum and have more compact code. Let me know which approach you prefer.

You can check the full version here.

Day 3: Mull It Over

Let’s start by defining a function to read a text file with the input, a simple File.read!(path). Now, according to the description, the problem screams Regular Expressions, my favorite thing in the world…

Fortunately, the Regex module provides us with everything we need, so we only have to focus on defining the correct patterns.

Spoiler: The second part of the exercise could also be solved with regular expressions, but I’ve taken a different approach, I’ll get to that.

Part 1

Our input is a string, so we can use Regex.scan to get all occurrences of mul(x,y) where x and y are integers, that is, made up of one or more digits. The \d+ pattern captures exactly that.

This expression is enough:

~r/mul\((\d+),(\d+)\)/

The function looks like this:

def get_valid_mul_instructions(section) do
  regex_valid_multi = ~r/mul\((\d+),(\d+)\)/
  Regex.scan(regex_valid_multi, section, capture: :all_but_first)
end

I’m taking advantage of the capture options for a slightly cleaner format: with capture: :all_but_first we directly get a list with the elements we need. For example, for mul(2,4) the result would be ["2", "4"].

[["2", "4"], ["5", "5"], ["11", "8"], ["8", "5"]]

In the end, we have a list like the one above, which we can process to convert the elements into integers, multiply them, and add everything together. I used Enum.reduce.

Enum.reduce(correct_instructions, 0, fn [x, y] = _mul, acc ->
  String.to_integer(x) * String.to_integer(y) + acc
end)
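As a self-contained sketch (not part of the original post), running this reduce over the captures shown earlier produces the puzzle's expected Part 1 result, 2*4 + 5*5 + 11*8 + 8*5 = 161:

```elixir
# The captures produced by Regex.scan on the sample program.
matches = [["2", "4"], ["5", "5"], ["11", "8"], ["8", "5"]]

# Convert each pair to integers, multiply, and accumulate the sum.
total =
  Enum.reduce(matches, 0, fn [x, y] = _mul, acc ->
    String.to_integer(x) * String.to_integer(y) + acc
  end)

IO.inspect(total)
# => 161
```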

Part 2

Ah! I almost gave up on this part.

My initial idea was to define a regular expression that would replace the don’t()…do() pattern with any string, like “INVALID,” for example. That way, we would have input without the invalid blocks, and we could reuse all the code from the first section.

After a thousand failed attempts and remembering why I hate regular expressions, I completely changed the approach to use String.split. When it also failed, I realized that at some point I changed the original input, and I was never going to get the correct result… anyway. That’s why the final version ended up being much longer than I would have liked, but I invite you to try regular expressions first and take advantage of Regex to solve this second part smoothly.

In this case, the approach was to use String.split to separate the blocks every time I encountered a don’t() or do() and have a list to iterate through.

def remove_invalid_blocks(section) do
  regex = ~r/(don't[(][)]|do[(][)])/
  String.split(section, regex, include_captures: true)
end

Something like this:

[
 "xmul(2,4)&mul[3,7]!^",
 "don't()",
 "_mul(5,5)+mul(32,64](mul(11,8)un",
 "do()",
 "?mul(8,5))",
 "don't()",
 "mul(2,3)"
]

We can add conditions so that everything between a don’t() and do() block is discarded. Once we have an input without these parts, we can apply the same procedure we used for part one.
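The filtering step itself is not shown in the post; one possible sketch (the function name keep_enabled_blocks is mine, not the author's) walks the chunks produced by String.split, toggling an enabled flag at each do()/don't() marker and keeping only the enabled chunks:

```elixir
defmodule Day3 do
  # Keep only the chunks that appear while instructions are enabled.
  # The markers themselves are dropped; the surviving chunks are
  # re-joined so Part 1's code can run on the result unchanged.
  def keep_enabled_blocks(chunks) do
    chunks
    |> Enum.reduce({true, []}, fn
      "don't()", {_enabled, kept} -> {false, kept}
      "do()", {_enabled, kept} -> {true, kept}
      chunk, {true, kept} -> {true, [chunk | kept]}
      _chunk, {false, kept} -> {false, kept}
    end)
    |> elem(1)
    |> Enum.reverse()
    |> Enum.join()
  end
end
```

Applied to the split output above, this keeps only the first chunk and "?mul(8,5))".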

The code ends up looking like this:

 path
   |> get_input()
   |> remove_invalid_blocks()
   |> get_valid_mul_instructions()
   |> sum_multiplication_results()

You can check the full version here.

Day 4: Ceres Search

Ah, this is one of those problems where the structure we define to work with can make things easier or more complicated. We need a matrix representation.

We could use lists to simulate the arrays of other programming languages; however, let’s consider how we will iterate to obtain the elements around a specific item.

It’s easier to have a shortcut indicating the coordinates, something like: item(3,4). With lists, we have to do a bit more manipulation, so I’ll use a map.

The idea is to transform the entry into a map that allows us constant access:

%{
  {0, 0} => "M", {0, 1} => "M", {0, 2} => "M",
  {1, 0} => "M", {1, 1} => "S", {1, 2} => "A",
  {2, 0} => "A", {2, 1} => "M", {2, 2} => "X"
}


Let’s define a function to read a text file with the input and transform each string into coordinates. For this, I will use Enum.with_index.

path
|> File.read!()
|> String.split("\n", trim: true)
|> Enum.with_index(fn element, index -> get_coordinate_format({element, index}) end)

Part 1

Once we have the expected format, then we convert the word we will search for into a list as well.

word = String.graphemes("XMAS")

Now, we filter the input for the positions where "X" appears. This way, we avoid checking element by element and only validate those that may be the beginning of the word.

Enum.filter(input, fn {_coordinate, value} -> value == character end)

character = "X"

The output will be something like this:

 [
   {{0, 4}, "X"},
   {{9, 5}, "X"},
   {{8, 5}, "X"}
 ]

Where the format corresponds to {row, column}. And now, we will work on this list. Considering that for a coordinate {x, y} the adjacent positions are the following:

[
  {row, column + 1},
  {row, column - 1},
  {row + 1, column},
  {row - 1, column},
  {row - 1, column + 1},
  {row - 1, column - 1},
  {row + 1, column + 1},
  {row + 1, column - 1}
]

We will iterate in each direction, comparing characters. That is, if position {x, y} = "X" and coordinate {x, y + 1} = "M", then we keep moving in the {x, y + 1} direction until one of the characters differs from our word.
If we complete the word, we add 1 to our counter (Enum.reduce).

Enum.reduce(coordinates, 0, fn {coord, _elem}, occurrences ->
  check_coordinate(coord, word, input) + occurrences
end)
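The check_coordinate helper is not shown in the post; a minimal sketch of the directional walk it describes could look like this (the module name, function names, and exact signature are assumptions of mine):

```elixir
defmodule Day4 do
  # The eight directions around a coordinate, as {row_step, column_step}.
  @directions [{0, 1}, {0, -1}, {1, 0}, {-1, 0},
               {-1, 1}, {-1, -1}, {1, 1}, {1, -1}]

  # Count in how many directions `word` (a list of graphemes) can be
  # read starting from `coord` in the coordinate map `grid`.
  def check_coordinate(coord, word, grid) do
    Enum.count(@directions, &matches_direction?(coord, &1, word, grid))
  end

  # Follow one {dr, dc} step per character; missing coordinates return
  # nil from the map and fail the comparison.
  defp matches_direction?({row, column}, {dr, dc}, word, grid) do
    word
    |> Enum.with_index()
    |> Enum.all?(fn {char, i} ->
      grid[{row + dr * i, column + dc * i}] == char
    end)
  end
end
```

For example, in a grid containing only "X", "M", "A", "S" left to right on row 0, starting at {0, 0} finds the word in exactly one direction.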

Part 2

For part two, use the same approach: look for the coordinates corresponding to the character "A" to work only with those that have a chance of being what we need.

Enum.filter(input, fn {_coordinate, value} -> value == character end)

character = "A"

And for the comparison of the elements around it I used brute force haha, since there are only 4 possible combinations, I decided to add them directly.

Enum.reduce(coordinates, 0, fn {coordinate, _elem}, acc ->
  valid_x_mas(coordinate, input) + acc
end)

def valid_x_mas({row, column} = _coordinate, input) do
  top_left = input[{row - 1, column - 1}]
  top_right = input[{row - 1, column + 1}]
  bottom_left = input[{row + 1, column - 1}]
  bottom_right = input[{row + 1, column + 1}]

  cond do
    top_left == "M" && bottom_left == "M" && top_right == "S" && bottom_right == "S" -> 1
    top_left == "M" && bottom_left == "S" && top_right == "M" && bottom_right == "S" -> 1
    top_left == "S" && bottom_left == "S" && top_right == "M" && bottom_right == "M" -> 1
    top_left == "S" && bottom_left == "M" && top_right == "S" && bottom_right == "M" -> 1
    true -> 0
  end
end

You can check the full version here.

The post Advent of Code 2024 appeared first on Erlang Solutions.

by Lorena Mireles at December 04, 2024 08:12

November 30, 2024

Madhur Garg

Jaipur

The perfect 3 days Jaipur itinerary - Day 1: Aesthetic fort vibes Morning: Nahargarh Fort: Begin with stunning views of Jaipur city. Explore the fort’s intricate architecture and serene ambiance. Gaitor Ki Chhatriyan: Visit these beautiful royal cenotaphs for a glimpse into Jaipur’s regal history. Afternoon: Stop by Jal Mahal: Take 10-20 minutes to admire this palace...

November 30, 2024 00:00

November 29, 2024

Erlang Solutions

Optimising for Concurrency: Comparing and contrasting the BEAM and JVM virtual machines

The success of any programming language in the Erlang ecosystem can be apportioned into three tightly coupled components. They are the semantics of the Erlang programming language, (on top of which other languages are implemented), the OTP libraries and middleware (used to architect scalable and resilient concurrent systems) and the BEAM Virtual Machine tightly coupled to the language semantics and OTP.

Take any of these components on their own, and you have a runner-up. But, put the three together, and you have the uncontested winner for scalable, resilient soft real-time systems. To quote Joe Armstrong, “You can copy the Erlang libraries, but if it does not run on BEAM, you can’t emulate the semantics”. This is reinforced by Robert Virding’s First Rule of Programming, which states that “Any sufficiently complicated concurrent program in another language contains an ad hoc informally-specified bug-ridden slow implementation of half of Erlang.”

In this post, we want to explore the BEAM VM internals. We will compare and contrast them with the JVM where applicable, highlighting why you should pay attention to them and care. For too long, this component has been treated as a black box and taken for granted, without understanding the reasons or implications. It is time to change that!

Highlights of the BEAM

Erlang and the BEAM VM were invented to be the right tools to solve a specific problem. Ericsson developed them to help implement telecom infrastructure, handling both mobile and fixed networks. This infrastructure is highly concurrent and scalable in nature. It has to display soft real-time properties and may never fail. We don’t want our phone calls dropped or our online gaming experience to be affected by system upgrades, high user load or software, hardware and network outages. The BEAM VM solves these challenges using a state-of-the-art concurrent programming model. It features lightweight BEAM processes which don’t share memory, are managed by the schedulers of the BEAM which can manage millions of them across multiple cores, and garbage collectors running on a per-process basis, highly optimised to reduce any impact on other processes. The BEAM is also the only widely used VM used at scale with a built-in distribution model which allows a program to run on multiple machines transparently.
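To make the model concrete, here is a minimal Elixir sketch (any BEAM language would do) of the share-nothing concurrency described above: a thousand lightweight processes, each isolated, communicating with the parent only through message passing.

```elixir
# Spawn 1,000 BEAM processes; each one sends its number back to the
# parent. No memory is shared -- coordination happens purely via
# mailboxes, which is what lets the schedulers manage millions of
# such processes across cores.
parent = self()

for i <- 1..1_000 do
  spawn(fn -> send(parent, {:result, i}) end)
end

# Collect the 1,000 messages (in whatever order they arrive) and sum.
total =
  Enum.reduce(1..1_000, 0, fn _, acc ->
    receive do
      {:result, i} -> acc + i
    end
  end)

IO.inspect(total)
# => 500500
```

Spawning a process here costs a few microseconds and a couple of kilobytes of memory, which is why this pattern scales where OS threads would not.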

The BEAM VM supports zero-downtime upgrades with hot code replacement, a way to modify application code at runtime. It is probably the most cited unique feature of the BEAM. Hot code loading means that the application logic can be updated by changing the runnable code in the system whilst retaining the internal process state. This is achieved by replacing the loaded BEAM files and instructing the VM to replace the references of the code in the running processes.

It is a crucial feature for no downtime code upgrades for telecom infrastructure, where redundant hardware was put to use to handle spikes. Nowadays, in the era of containerisation, other techniques are also used for production updates. Those who have never used it dismiss it as a less important feature, but it is nonetheless useful in the development workflow. Developers can iterate faster by replacing part of their code without having to restart the system to test it. Even if the application is not designed to be upgradable in production, this can reduce the time needed for recompilation and redeployments.

Highlights of the JVM

The Java Virtual Machine (JVM) was invented by Sun Microsystems with the intent to provide a platform for ‘write once’ code that runs everywhere. They created an object-oriented language similar to C++, but memory-safe because its runtime error detection checks array bounds and pointer dereferences. The JVM ecosystem became extremely popular in the Internet era, making it the de-facto standard for enterprise server applications. The wide range of applicability was enabled by a virtual machine that caters for many use cases, and an impressive set of libraries supporting enterprise development.

The JVM was designed with efficiency in mind. Most of its concepts are abstractions of features found in popular operating systems such as the threading model which maps the VM threads to operating system threads. The JVM is highly customisable, including the garbage collector (GC) and class loaders. Some state-of-the-art GC implementations provide highly tunable features catering for a programming model based on shared memory. And, the JIT (Just-in-time) compiler automatically compiles bytecode to native machine code with the intent to speed up parts of the application.

The JVM allows you to change the code while the program is running. It is a very useful feature for debugging purposes, but production use of this feature is not recommended due to serious limitations.

Concurrency and Parallelism

We talk about parallel code execution if parts of the code are run at the same time on multiple cores, processors or computers, while concurrent programming refers to handling events arriving at the system independently. Concurrent execution can be simulated on single-core hardware, while parallel execution cannot. Although this distinction may seem pedantic, the difference results in some very different problems to solve. Think of many cooks making a plate of carbonara pasta. In the parallel approach, the tasks are split across the number of cooks available, and a single portion would be completed as quickly as it took these cooks to complete their specific tasks. In a concurrent world, you would get a portion for every cook, where each cook does all of the tasks. You use parallelism for speed and concurrency for scale.

Parallel execution tries to decompose the problem into parts that are independent of each other. Boil the water, get the pasta, mix the egg, fry the guanciale ham, and grate the pecorino cheese. The shared data (or in our example, the serving dish) is handled by locks, mutexes and various other techniques to guarantee correctness. Another way to look at this is that the data (or ingredients) are present, and we want to utilise as many parallel CPU resources as possible to finish the job as quickly as possible.

Concurrent programming, on the other hand, deals with many events that arrive at the system at different times and tries to process all of them within a reasonable timeframe. On multi-core or distributed architectures, some of the processing may run in parallel. Another way to look at it is that the same cook boils the water, gets the pasta, mixes the eggs and so on, following a sequential algorithm which is always the same. What changes across processes (or cooks) is the data (or ingredients) to work on, which exist in multiple instances.

In summary, concurrency and parallelism are two intrinsically different problems, requiring different solutions.

Concurrency the Java way

In Java, concurrent execution is implemented using VM threads. Before the latest developments, only one threading model, called Platform Threads existed. As it is a thin abstraction layer above operating system threads, Platform Threads are scheduled in a rather simple, priority-based way, leaving most of the work to the underlying operating system. With Java 21, a new threading model was introduced, the Virtual Threads. This new model is very similar to BEAM processes since virtual threads are scheduled by the JVM, providing better performance in applications where thread contention is not negligible. Scheduling works by mounting a virtual thread to the carrier (OS) thread and unmounting it when the state of the virtual thread becomes blocked, and replacing it with a new virtual thread from the pool.

Since Java promotes the use of shared data structures, both threading models suffer from performance bottlenecks caused by synchronisation-related issues such as frequent CPU cache invalidation and locking errors. Programming with concurrency primitives is also a difficult task because of the challenges created by the shared memory model. To overcome these difficulties, there have been attempts to simplify and unify the concurrent programming model, the most successful being the Akka framework. Unfortunately, it is not widely used, which limits its usefulness as a unified concurrency model, even for enterprise-grade applications. While Akka does a great job of replicating the higher-level constructs, it is constrained by the JVM, which lacks the low-level primitives that would allow it to be highly optimised for concurrency. The JVM's primitives enable a wider range of use cases, but they make programming distributed systems harder: they provide no built-in primitives for communication and are often based on a shared memory model. For example, where in a distributed system do you place your shared memory? And what is the cost of accessing it?

Garbage Collection

Garbage collection is a critical task for most applications, but applications may have very different performance requirements. Since the JVM is designed to be a ubiquitous platform, there is no one-size-fits-all solution. There are garbage collectors designed for resource-limited environments such as embedded devices, and others for resource-intensive, highly concurrent or even real-time applications. The JVM GC interface makes it possible to use third-party collectors as well.

Due to the Java Memory Model, concurrent garbage collection is a hard task. The JVM needs to keep track of the memory areas that are shared between multiple threads, the access patterns to the shared memory, thread states, locks and so on. Because of shared memory, collections affect multiple threads simultaneously, making it difficult to predict the performance impact of GC operations. It is so difficult that an entire industry has been built around tools and expertise for GC optimisation.

The BEAM and Concurrency

Some say that the JVM is built for parallelism, the BEAM for concurrency. While this might be an oversimplification, its concurrency model makes the BEAM more performant in cases where thousands or even millions of concurrent tasks should be processed in a reasonable timeframe.

The BEAM provides lightweight processes to give context to the running code. BEAM processes are different from operating system processes, but they share many concepts. BEAM processes, also called actors, don’t share memory, but communicate through message passing, copying data from one process to another. Message passing is a feature that the virtual machine implements through mailboxes owned by individual processes. It is a non-blocking operation, which means that sending a message to another process is almost instantaneous and the execution of the sender is not blocked during the operation. The messages sent are in the form of immutable data, copied from the stack of the sending process to the mailbox of the receiving one. There are no shared data structures, so this can be achieved without the need for locks and mutexes among the communicating processes, only a lock on the mailbox in case multiple processes send a message to the same recipient in parallel.
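The mailbox model can be approximated on the JVM for illustration. The following is a hypothetical Java sketch of a BEAM-style process — private state, a per-process mailbox, non-blocking sends — not actual BEAM machinery; the class and message names are invented:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;

// A rough JVM analogy of a BEAM process: private state, a mailbox,
// and communication only through messages.
public class CounterActor extends Thread {
    private final BlockingQueue<Object> mailbox = new LinkedBlockingQueue<>();
    private int count = 0; // private state: no other thread ever reads or writes it

    // Sending is non-blocking for the sender, like `!` in Erlang.
    public void send(Object msg) { mailbox.offer(msg); }

    @SuppressWarnings("unchecked")
    @Override public void run() {
        try {
            while (true) {
                Object msg = mailbox.take();              // messages handled in FIFO order
                if ("inc".equals(msg)) count++;
                else if (msg instanceof SynchronousQueue) // a "get" request with a reply channel
                    ((SynchronousQueue<Integer>) msg).put(count);
                else if ("stop".equals(msg)) return;
            }
        } catch (InterruptedException ignored) { }
    }

    static int demo(int increments) throws InterruptedException {
        CounterActor actor = new CounterActor();
        actor.start();
        for (int i = 0; i < increments; i++) actor.send("inc");
        SynchronousQueue<Integer> reply = new SynchronousQueue<>();
        actor.send(reply);             // the query arrives after all "inc" messages
        int result = reply.take();
        actor.send("stop");
        return result;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo(5)); // prints 5
    }
}
```

Only the mailbox itself is synchronised, mirroring the BEAM's single lock per mailbox; the counter state needs no lock because exactly one thread owns it.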

Immutable data and message passing together enable the programmer to write processes that work independently of each other and to focus on functionality instead of low-level memory management and task scheduling. This simple design is effective on a single thread, across multiple threads on a local machine running in the same VM and, using the inter-VM communication facilities of the BEAM, across the network and machines running the BEAM. Because messages are immutable, processes can be scheduled onto another OS thread (or machine) without locking, providing almost linear scaling on distributed, multi-core architectures. Processes are handled the same way on a local VM as in a cluster of VMs; message sending works transparently regardless of the location of the receiving process.

Processes do not share memory, allowing data replication for resilience and distribution for scale. With two instances of the same process running on the same machine or on separate ones, state updates can be shared between them. If one of the processes or machines fails, the other has an up-to-date copy of the data and can continue handling requests without interruption, making the system fault-tolerant. If more than one machine is operational, all the processes can handle requests, giving you scalability. The BEAM provides highly optimised primitives for all of this to work seamlessly, while OTP (the "standard library") provides the higher-level constructs to make life easy for the programmers.

Scheduler

We mentioned that one of the strongest features of the BEAM is the ability to run concurrent tasks in lightweight processes. Managing these processes is the task of the scheduler.

The scheduler starts, by default, one OS thread per core and optimises the workload between them. Each process consists of code to be executed and a state that changes over time. The scheduler picks the first process in the run queue that is ready to run and gives it a certain number of reductions to execute, where each reduction is the rough equivalent of a BEAM command. Once the process has run out of reductions, blocks on I/O, waits for a message, or completes executing its code, the scheduler picks the next process from the run queue and dispatches it. This scheduling technique is called pre-emptive.

We have mentioned the Akka framework many times. Its biggest drawback is that scheduling is not done at the JVM level, so the programmer must annotate the code with scheduling points. The BEAM, by contrast, removes this control from the hands of the programmer, preserving and guaranteeing soft real-time properties, as there is no risk of accidentally causing process starvation.

The processes can be spread around the available scheduler threads to maximise CPU utilisation. There are many ways to tweak the scheduler but it is rarely needed, only for edge cases, as the default configuration covers most usage patterns.

There is a sensitive topic that frequently pops up regarding schedulers: how to handle Native Implemented Functions (NIFs). A NIF is a code snippet written in C, compiled as a library and run in the same memory space as the BEAM for speed. The problem with NIFs is that they are not pre-emptive and can affect the schedulers. In recent BEAM versions, a new feature, dirty schedulers, was added to give better control over NIFs. Dirty schedulers are separate schedulers that run in different threads to minimise the interruption a NIF can cause in a system. The word dirty refers to the nature of the code that is run by these schedulers.

Garbage Collector

Modern, high-level programming languages today mostly use a garbage collector for memory management, and the BEAM languages are no exception. Trusting the virtual machine to handle resources and manage memory is very handy when you want to write high-level concurrent code, as it simplifies the task. The underlying implementation of the garbage collector is fairly straightforward and efficient, thanks to a memory model based on immutable state. Data is copied, not mutated, and the fact that processes do not share memory removes inter-process dependencies that would otherwise need to be managed.

Another feature of the BEAM is that garbage collection runs only when needed, on a per-process basis, without affecting other processes waiting in the run queue. As a result, garbage collection in Erlang does not 'stop the world'. It prevents processing latency spikes because the VM is never stopped as a whole – only specific processes are, and never all of them at the same time. In practice, it is just part of what a process does and is treated as another reduction. Collecting garbage suspends the process for a very short interval, often microseconds. As a result, there will be many small bursts, triggered only when the process needs more memory. A single process usually doesn't allocate large amounts of memory and is often short-lived, further reducing the impact by immediately freeing all its allocated memory on termination.

More about features

The features of the garbage collector are discussed in an excellent blog post by Lukas Larsson. There are many intricate details, but it is optimised to handle immutable data in an efficient way, dividing the data between the stack and the heap for each process. The best approach is to do the majority of the work in short-lived processes.

A question that often comes up on this topic is how much memory the BEAM uses. Under the hood, the VM allocates big chunks of memory and uses custom allocators to store the data efficiently and minimise the overhead of system calls. 

This has two visible effects: used memory decreases gradually after the space is no longer needed, and reallocating huge amounts of data may mean temporarily doubling the working memory. The first effect can, if necessary, be mitigated by tweaking the allocator strategies. The second is easy to monitor and plan for if you have visibility into the different types of memory usage. (One monitoring tool that provides such system metrics out of the box is WombatOAM.)

JVM vs BEAM concurrency

As mentioned before, the JVM and the BEAM handle concurrent tasks very differently. Under high load, shared resources become bottlenecks, and in a Java application we usually can't avoid that. This is why the BEAM is superior for these kinds of applications: while copying memory has a certain cost, the performance impact caused by synchronised access to shared resources is much higher. We performed many tests to measure this impact.

JVM and the BEAM

This chart clearly displays the large performance difference between the JVM and the BEAM. In this test, the applications were implemented in Elixir and Java; the Elixir code runs on the BEAM, while the Java code runs on the JVM.

When not to use the BEAM

It is very much about the right tool for the job. Do you need a system to be extremely fast, but are not concerned about concurrency? Handling a few events in parallel, and having to handle them fast? Need to crunch numbers for graphics, AI or analytics? Go down the C++, Python or Java route. Telecom infrastructure does not need fast operations on floats, so speed was never a priority for the BEAM. Add dynamic typing, which requires all type checks to be done at runtime, and compile-time optimisations become far less trivial. So number crunching is best left to the JVM, Go or other languages that compile to native code. It is no surprise that floating-point operations on Erjang, the version of Erlang running on the JVM, were 5000% faster than on the BEAM. But where we've seen the BEAM shine is in using its concurrency to orchestrate number crunching, outsourcing the analytics to C, Julia, Python or Rust. You do the map outside the BEAM and the reduce within the BEAM.

The mantra has always been fast enough. It takes a few hundred milliseconds for humans to perceive a stimulus (an event) and process it in their brains, so micro- or nanosecond response times are unnecessary for many applications. Nor would you use the BEAM for microcontrollers; it is too resource-hungry. But for embedded systems with a bit more processing power, where multi-core is becoming the norm and you need concurrency, the BEAM shines. Back in the 90s, we were implementing telephony switches handling tens of thousands of subscribers on embedded boards with 16 MB of memory. How much memory does a Raspberry Pi have these days? And finally, hard real-time systems. You would probably not want the BEAM to manage your airbag control system. You need hard guarantees, something only a hard real-time OS and a language with no garbage collection or exceptions can provide. An implementation of an Erlang VM running on bare metal, such as GRiSP, will give you similar guarantees.

Conclusion

Use the right tool for the job. If you are writing a soft real-time system which has to scale out of the box, should never fail, and do so without the hassle of having to reinvent the wheel, the BEAM is the battle-proven technology you are looking for.

For many, it works as a black box. Not knowing how it works would be analogous to driving a Ferrari while not being able to achieve optimal performance, or not understanding which part of the engine that strange sound is coming from. This is why you should learn more about the BEAM, understand its internals and be ready to fine-tune and fix it.

For those who have used Erlang and Elixir in anger, we have launched a one-day instructor-led course which will demystify and explain a lot of what you saw whilst preparing you to handle massive concurrency at scale. The course is available through our new instructor-led remote training; learn more here. We also recommend The BEAM Book by Erik Stenman and the BEAM Wisdoms, a collection of articles by Dmytro Lytovchenko.

If you’d like to speak to a member of the team, feel free to drop us a message.

The post Optimising for Concurrency: Comparing and contrasting the BEAM and JVM virtual machines appeared first on Erlang Solutions.

by Attila Sragli at November 29, 2024 09:54

November 21, 2024

Ignite Realtime Blog

Florian, Dan and Dave Elected in the XSF!

In an annual vote, not one, not two, but three Ignite Realtime community members have been selected into leadership positions of the XMPP Standards Foundation! 🎉

The XMPP Standards Foundation is an independent, nonprofit standards development organisation whose primary mission is to define open protocols for presence, instant messaging, and real-time communication and collaboration on top of the IETF’s Extensible Messaging and Presence Protocol (XMPP). Most of the projects that we’re maintaining in the Ignite Realtime community have a strong dependency on XMPP.

The XSF Board of Directors, in which both @Flow and @dwd are elected, oversees the business affairs of the organisation. They are now in a position to make key decisions on the direction of XMPP technology and standards development, manage resources and partnerships to further the growth of the XMPP ecosystem and promote XMPP in the larger open-source and communications community, advocating for its adoption and use in various applications.

The XMPP Council, in which @danc has been reelected, is the technical steering group that approves XMPP Extension Protocols. The Council is responsible for standards development and process management. With that, Dan is now on the forefront of new developments within the XMPP community!

Congrats to you all, Dan, Dave and Florian!

For other release announcements and news follow us on Mastodon or X

by guus at November 21, 2024 22:19

The XMPP Standards Foundation

2024 Annual Meeting and Voting Results

Every year the members of the XSF get together to vote on the current quarter’s new and renewing members. Additionally, elections for both XMPP Council and Board of Directors have been held.

This year’s election meeting was held on November 21st, 2024 and voting results can be found in the XSF Wiki.

The 2024/2025 term will be formed by the following members:

  • XMPP Council
    • Dan Caseley
    • Daniel Gultsch
    • Jérôme Poisson
    • Stephen Paul Weber
    • Marvin Wißfeld
  • Board of Directors
    • Edward Maurer
    • Ralph Meijer
    • Florian Schmaus
    • Dave Cridland
    • Arne-Bruen Vogelsang

Please congratulate them if you run across any of those listed here, but also please help us make this another great year for the XSF.

November 21, 2024 00:00

November 15, 2024

The XMPP Standards Foundation

MongooseIM 6.3 - Monitor with Prometheus, scale up with CockroachDB

MongooseIM is a scalable, efficient, high-performance instant messaging server. At Erlang Solutions, we believe that it is essential to use the right tool for the job, and this is why the server implements the proven, open, and extensible XMPP protocol, which was designed for instant messaging from the beginning. Thanks to the inherent flexibility of XMPP, MongooseIM is very versatile and has a variety of applications. Being specified in RFC and XEP documents, the protocol ensures compatibility with other software as well, including multiple clients and libraries. Similarly to the protocol, we have chosen the Erlang programming language, because it was designed with the intention of handling large numbers of parallel connections – which is exactly the case in a messaging server.

With each new version, we introduce new features and improvements. For example, version 6.2.0 introduced our new CETS in-memory storage, making setup and autoscaling in cloud environments easier than before (see the blog post for details). The latest release 6.3.0 is no exception. The main highlight is the complete instrumentation rework, allowing seamless integration with modern monitoring solutions like Prometheus. Additionally, we have added CockroachDB to the list of supported databases, so you can now let this highly scalable database grow with your applications while avoiding being locked into your cloud provider.

Observability and instrumentation

In software engineering, observability is the ability to gather data from a running system in order to figure out what is going on inside: is it working as expected? Does it have any issues? How much load is it handling, and could it do more? There are many ways to improve the observability of a system, and one of the most important is instrumentation. Just like adding extra measuring equipment to a physical system, this means adding extra code to the software. It allows the system administrator to observe the internal state of the system. This comes with a price: there is more work for the developers, increased complexity, and potential performance degradation caused by the collection and processing of additional data.

However, the benefits usually outweigh the costs, and the ability to inspect the system is often a critical requirement. It is also worth noting that the metrics and events gathered by instrumentation can be used for further automation, e.g. for autoscaling or sending alarms to the administrator.

Instrumentation in MongooseIM

Even before our latest release of MongooseIM, there have been multiple means to observe its behaviour:

Metrics provide numerical values of measured system properties. The values change over time, and the metric can present current value, sum from a sliding window, or a statistic (histogram) of values from a given time period. Prior to version 6.3, MongooseIM used to store such metrics with the help of the exometer library. To view the metrics, one had to configure an Exometer exporter, which would periodically send the metrics to an external service using the Graphite protocol. Because of the protocol, the metrics would be exported to Graphite or InfluxDB version 1. One could also query a limited subset of metrics using our GraphQL API (or the legacy REST API) or with the command line interface. Alternatively, metrics could be retrieved from the Erlang shell of a running MongooseIM node.

Logs are another type of instrumentation present in the code. They inform about events occurring in the system and since version 4, they are events with extensible map-like structure and can be formatted e.g. as plain text or JSON. Subsequently, they can be shown in the console or stored in files. You can also set up a log management system like the Elastic (ELK) Stack or Splunk – see the documentation for more details.

Prior to version 6.3.0, the instrumented code needed to call the log and metric APIs separately. Updating a metric and logging an event required two distinct function calls. Moreover, if there were multiple metrics (e.g. execution time and total number of calls), multiple function calls were required. The main issue with this solution, however, was the hardcoding of Exometer as the metric library and the limitation of the Graphite protocol used to push the metrics to external services.

Instrumentation rework in MongooseIM 6.3

The lack of support for the modern and widespread Prometheus protocol was one of the main reasons for the complete rework of instrumentation in version 6.3, which is summarised in the following diagram:

The most noticeable difference is that in the instrumented code, there is just one event emitted. Such an event is identified by its name and a key-value map of labels and contains measurements (with optional metadata) organised in a key-value map. Each event has to be registered before its instances are emitted with particular measurements. The point of this preliminary step is not only to ensure that all events are handled but also to provide additional information about the event, e.g. the measurement keys that will be used to update metrics. Emitted events are then handled by configurable handlers. Currently, there are three such handlers. Exometer and Logger work similarly as before, but there is a new Prometheus handler as well, which stores the metrics internally in a format compatible with Prometheus and exposes them over an HTTP API. This means that any external service can now scrape the metrics using the Prometheus protocol. The primary case would be to use Prometheus for metrics collection, and a graphical tool like Grafana for display. If you however prefer InfluxDB version 2, you can easily configure a scraper, which would periodically put new data into InfluxDB.

Apart from supporting the Prometheus protocol, additional benefits of the new solution include easier configuration, extensibility, and the ability to add more handlers in the future. You can also have multiple handlers enabled simultaneously, allowing you to gradually change your metric backend from Exometer to Prometheus. Conversely, you can also disable all instrumentation, which was not possible prior to version 6.3. Although disabling it might seem to make little sense at first glance, because it renders the system a black box, it can be useful for gaining extra performance in some cases – e.g. if external metrics like CPU usage are enough, in the case of an isolated embedded system, or if resources are very limited.

There are more options possible, and you can find them in the documentation. You can also find more details and examples of instrumentation in the detailed blog post.

CockroachDB – a database that scales with MongooseIM

MongooseIM works best when paired with a relational database like PostgreSQL or MySQL, enabling easy cluster node discovery with CETS and persistent storage for users’ accounts, archived messages and other kinds of data. Although such databases are not horizontally scalable out of the box, you can use managed solutions like Amazon Aurora, AlloyDB or Azure Cosmos DB for PostgreSQL. The downsides are the possible vendor lock-in and the fact that you cannot host and manage the DB yourself. With version 6.3 however, the possibilities are extended to CockroachDB. This PostgreSQL-compatible distributed database can be used either as a provider-independent cloud-based solution or as an internally hosted cluster. You can instantly set it up in your local environment and take advantage of the horizontal scalability of both MongooseIM and CockroachDB. If you want to learn how to deploy both MongooseIM and CockroachDB in Kubernetes, see the documentation for CockroachDB and the Helm chart for MongooseIM, together with our recent blog post about setting up an auto-scalable cluster. If you are interested in having an auto-scalable solution deployed for you, please consider our MongooseIM Autoscaler.

What’s next?

You can read more about MongooseIM 6.3 in the detailed blog post. We also recommend visiting our product page to see the possible options of support and the services we offer. You can also try the server out at trymongoose.im.

Read about Erlang Solutions as sponsor of the XSF.

November 15, 2024 00:00

November 14, 2024

Erlang Solutions

MongooseIM 6.3: Prometheus, CockroachDB and more

MongooseIM is a scalable, efficient, high-performance instant messaging server using the proven, open, and extensible XMPP protocol. With each new version, we introduce new features and improvements. For example, version 6.2.0 introduced our new CETS in-memory storage, making setup and autoscaling in cloud environments easier than before (see the blog post for details). The latest release 6.3.0 is no exception. The main highlight is the complete instrumentation rework, allowing seamless integration with modern monitoring solutions like Prometheus. 

Additionally, we have added CockroachDB to the list of supported databases, so you can now let this highly scalable database grow with your applications while avoiding being locked into your cloud provider.

Observability and instrumentation

In software engineering, observability is the ability to gather data from a running system to figure out what is going on inside: is it working as expected? Does it have any issues? How much load is it handling, and could it do more? There are many ways to improve the observability of a system, and one of the most important is instrumentation. Just like adding extra measuring equipment to a physical system, this means adding additional code to the software. It allows the system administrator to observe the internal state of the system. This comes with a price: there is more work for the developers, increased complexity, and potential performance degradation caused by the collection and processing of additional data.

However, the benefits usually outweigh the costs, and the ability to inspect the system is often a critical requirement. It is also worth noting that the metrics and events gathered by instrumentation can be used for further automation, e.g. for autoscaling or sending alarms to the administrator.

Instrumentation in MongooseIM

Even before our latest release of MongooseIM, there have been multiple means to observe its behaviour:

Metrics provide numerical values of measured system properties. The values change over time, and the metric can present current value, sum from a sliding window, or a statistic (histogram) of values from a given time period. Prior to version 6.3, MongooseIM used to store such metrics with the help of the exometer library. To view the metrics, one had to configure an Exometer exporter, which would periodically send the metrics to an external service using the Graphite protocol. Because of the protocol, the metrics would be exported to Graphite or InfluxDB version 1. One could also query a limited subset of metrics using our GraphQL API (or the legacy REST API) or with the command line interface. Alternatively, metrics could be retrieved from the Erlang shell of a running MongooseIM node.

Logs are another type of instrumentation present in the code. They inform about events occurring in the system and since version 4, they are events with extensible map-like structure and can be formatted e.g. as plain text or JSON. Subsequently, they can be shown in the console or stored in files. You can also set up a log management system like the Elastic (ELK) Stack or Splunk – see the documentation for more details.

The diagram below shows how these two types of instrumentation can work together:

The first observation is that the instrumented code needs to call the log and metric APIs separately. Updating a metric and logging an event requires two distinct function calls. Moreover, if there are multiple metrics (e.g. execution time and total number of calls), multiple function calls are required. There is potential for inconsistency between metrics, or between metrics and logs, because an error could happen between the function calls. The main issue with this solution, however, is the hardcoding of Exometer as the metric library and the limitation of the Graphite protocol used to push the metrics to external services.

Instrumentation rework in MongooseIM 6.3

The lack of support for the modern and widespread Prometheus protocol was one of the main reasons for the complete rework of instrumentation in version 6.3. Let’s see the updated diagram of MongooseIM instrumentation:

The most noticeable difference is that in the instrumented code, there is just one event emitted. Such an event is identified by its name and a key-value map of labels and contains measurements (with optional metadata) organised in a key-value map. Each event has to be registered before its instances are emitted with particular measurements. The point of this preliminary step is not only to ensure that all events are handled but also to provide additional information about the event, e.g. the measurement keys that will be used to update metrics. Emitted events are then handled by configurable handlers. Currently, there are three such handlers. Exometer and Logger work similarly as before, but there is a new Prometheus handler as well, which stores the metrics internally in a format compatible with Prometheus and exposes them over an HTTP API. This means that any external service can now scrape the metrics using the Prometheus protocol. The primary case would be to use Prometheus for metrics collection, and a graphical tool like Grafana for display. If you however prefer InfluxDB version 2, you can easily configure a scraper, which would periodically put new data into InfluxDB.
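To make the flow concrete, here is a hypothetical sketch of the "one event, many handlers" pattern in Java (MongooseIM itself implements this in Erlang; all names below are illustrative, not its actual API). Events carry a name, labels and measurements, must be registered before emission, and fan out to every configured handler:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Consumer;

// Hypothetical sketch of an instrumentation facade: one emission in the
// instrumented code fans out to all configured handlers (metrics, logs, ...).
public class Instrument {
    public record Event(String name, Map<String, String> labels,
                        Map<String, Long> measurements) {}

    private final Set<String> registered = new HashSet<>();
    private final List<Consumer<Event>> handlers = new ArrayList<>();

    // Preliminary registration step: ensures every emitted event is known.
    public void register(String name) { registered.add(name); }

    public void addHandler(Consumer<Event> handler) { handlers.add(handler); }

    public void emit(Event event) {
        if (!registered.contains(event.name()))
            throw new IllegalStateException("unregistered event: " + event.name());
        handlers.forEach(h -> h.accept(event)); // fan out to all handlers
    }

    public static void main(String[] args) {
        Instrument instr = new Instrument();
        AtomicLong totalTime = new AtomicLong();
        // A Prometheus-style handler might aggregate; a Logger-style one might print.
        instr.addHandler(e -> totalTime.addAndGet(e.measurements().get("time")));
        instr.register("message_sent");
        instr.emit(new Event("message_sent", Map.of("host", "localhost"), Map.of("time", 12L)));
        instr.emit(new Event("message_sent", Map.of("host", "localhost"), Map.of("time", 30L)));
        System.out.println(totalTime.get()); // prints 42
    }
}
```

The instrumented code makes a single `emit` call; which backends consume the event is purely a configuration concern, which is the decoupling the rework aims for.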

As you can see in the diagram, logs can be also emitted directly, bypassing the instrumentation API. This is the case for multiple logs in the system, because often there is no need for any metrics, and a log message is enough. In the future though, we might decide to fully replace logs with instrumentation events, because they are more extensible.

Apart from supporting the Prometheus protocol, additional benefits of the new solution include easier configuration, extensibility, and the ability to add more handlers in the future. You can also have multiple handlers enabled simultaneously, allowing you to gradually change your metric backend from Exometer to Prometheus. Conversely, you can also disable all instrumentation, which was not possible prior to version 6.3. Although disabling it might seem to make little sense at first glance, because it renders the system a black box, it can be useful for gaining extra performance in some cases – e.g. if external metrics like CPU usage are enough, in the case of an isolated embedded system, or if resources are very limited.

The table below compares the legacy metrics solution with the new instrumentation framework:

| Solution | Legacy: mongoose_metrics | New: mongoose_instrument |
|---|---|---|
| Intended use | Metrics | Metrics, logs, distributed tracing, alarms, … |
| Coupling with handlers | Tight: hardcoded Exometer logic, one metric update per function call | Loose: events separated from configurable handlers |
| Supported handlers | Exometer is hardcoded | Exometer, Prometheus, Log |
| Events identified by | Exometer metric name (a list) | Event name, labels (key-value map) |
| Event value | Single-dimensional numerical value | Multi-dimensional measurements with metadata |
| Consistency checks | None – it is up to the implementer to verify that the correct metric is created and updated | Each event has to be registered before it is emitted |
| API | GraphQL / CLI and REST | Prometheus HTTP endpoint, legacy GraphQL / CLI / REST for Exometer |

There are about 140 events in total, and some of them have multiple dimensions. You can find an overview in the documentation. In terms of dashboards for tools like Grafana, we believe that each use case of MongooseIM deserves its own. If you are interested in getting one tailored to your needs, don’t hesitate to contact us.

Using the instrumentation

Let’s see the new instrumentation in action now. Starting with configuration, let’s examine the new additions to the default configuration file:

[[listen.http]]
  port = 9091
  transport.num_acceptors = 10

  [[listen.http.handlers.mongoose_prometheus_handler]]
    host = "_"
    path = "/metrics"

(...)

[instrumentation.prometheus]

[instrumentation.log]

The first section, [[listen.http]], specifies the Prometheus HTTP endpoint. The following [instrumentation.*] sections enable the Prometheus and Log handlers with the default settings – in general, instrumentation events are logged on the DEBUG level, but you can change it. This configuration is all you need to see the metrics at http://localhost:9091/metrics when you start MongooseIM.
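Once the endpoint is up, it serves metrics in the Prometheus text exposition format: one metric per line, consisting of a name, optional labels in braces, and a numeric value. A rough Python sketch of consuming that format (the metric names in the sample are made up for illustration, not actual MongooseIM metric names):

```python
# Minimal parser for Prometheus text-format lines (illustrative only:
# in practice a Prometheus server scrapes /metrics for you).
def parse_metrics(text):
    metrics = {}
    for line in text.strip().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip HELP/TYPE/comment lines
            continue
        name, value = line.rsplit(" ", 1)
        metrics[name] = float(value)
    return metrics

# Hypothetical sample resembling exposition-format output
sample = """
# TYPE message_processing_time_count counter
message_processing_time_count{host_type="localhost"} 42
message_processing_time_sum{host_type="localhost"} 1234
"""
print(parse_metrics(sample)['message_processing_time_count{host_type="localhost"}'])  # → 42.0
```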

As a second example, let’s say that you want only the Graphite protocol integration. In this case, you might configure MongooseIM to use only the Exometer handler, which would push the metrics prefixed with mim to the influxdb1 host every 60 seconds:

[[instrumentation.exometer.report.graphite]]
  interval = 60_000
  prefix = "mim"
  host = "influxdb1"

There are more options possible, and you can find them in the documentation.

Tracing – ad-hoc instrumentation

There is one more type of observability available in Erlang systems, which is tracing. It enables a user to have a more in-depth look into the Erlang processes, including the functions being called and the internal messages being exchanged. It is meant to be used by Erlang developers, and should not be used in production environments because of the impact it can have on a running system. It is good to know, however, because it could be helpful to diagnose unusual issues. To make tracing more user-friendly, MongooseIM now includes erlang_doctor with some MongooseIM-specific utilities (see the tr_util module). This tool provides low-level ad-hoc instrumentation, allowing you to instrument functions in a running system, and gather the resulting data in an in-memory table, which can be then queried, processed, and – if needed – exported to a file. Think of it as a backup solution, which could help you diagnose hidden issues, should you ever experience one.

CockroachDB – a database that scales with MongooseIM

MongooseIM works best when paired with a relational database like PostgreSQL or MySQL, enabling easy cluster node discovery with CETS and persistent storage for users’ accounts, archived messages and other kinds of data. Although such databases are not horizontally scalable out of the box, you can use managed solutions like Amazon Aurora, AlloyDB or Azure Cosmos DB for PostgreSQL. The downsides are the possible vendor lock-in and the fact that you cannot host and manage the DB yourself. With version 6.3 however, the possibilities are extended to CockroachDB. This PostgreSQL-compatible distributed database can be used either as a provider-independent cloud-based solution or as an internally hosted cluster. You can instantly set it up in your local environment and take advantage of the horizontal scalability of both MongooseIM and CockroachDB. If you want to learn how to deploy both MongooseIM and CockroachDB in Kubernetes, see the documentation for CockroachDB and the Helm chart for MongooseIM, together with our recent blog post about setting up an auto-scalable cluster. If you are interested in having an auto-scalable solution deployed for you, please consider our MongooseIM Autoscaler.
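Because CockroachDB speaks the PostgreSQL wire protocol, pointing MongooseIM at it follows the usual RDBMS pool configuration. A sketch, assuming a local CockroachDB node on its default SQL port 26257 and hypothetical database credentials (check the MongooseIM documentation for the exact driver value to use for CockroachDB):

```toml
[outgoing_pools.rdbms.default]
  workers = 5

  [outgoing_pools.rdbms.default.connection]
    driver = "pgsql"        # CockroachDB is PostgreSQL-compatible
    host = "localhost"
    port = 26257            # CockroachDB default SQL port
    database = "mongooseim"
    username = "mongooseim"
    password = "secret"
```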

Summary

MongooseIM 6.3.0 opens new possibilities for observability – the Prometheus protocol is supported instantly with a new reworked instrumentation layer underneath, guaranteeing ease of future extensions. Regarding database integration, you can now use CockroachDB to store all your persistent data. Apart from these changes, the latest version introduces a multitude of improvements and updates – see the release notes for more information. As the next step, we recommend visiting our product page to see the possible options of support and the services we offer. You can also try the server out at trymongoose.im. In any case, should you have any further questions, feel free to contact us.

The post MongooseIM 6.3: Prometheus, CockroachDB and more appeared first on Erlang Solutions.

by Pawel Chrzaszcz at November 14, 2024 10:16

November 12, 2024

ProcessOne

Docker: Keep ejabberd automagically updated with Watchtower

This blog post will guide you through the process of setting up an ejabberd Community Server using Docker and Docker Compose, and will also introduce Watchtower for automatic updates. This approach ensures that your configuration remains secure and up to date.

Furthermore, we will examine the potential risks associated with automatic updates and suggest Diun as an alternative tool for notification-based updates.

1. Prerequisites

Please ensure that Docker and Docker Compose are installed on your system.
It would be beneficial to have a basic understanding of Docker concepts, including containers, volumes, and bind-mounts.

2. Set up ejabberd in a docker container

Let’s first create a minimal Docker Compose configuration to start an ejabberd instance.

2.1: Prepare the directories

For this setup, we will create a directory structure to store the configuration, database, and logs. This will assist in maintaining an organised setup, facilitating data management and backup.

mkdir ejabberd-setup && cd ejabberd-setup
touch docker-compose.yml
mkdir conf
touch conf/ejabberd.yml
mkdir database
mkdir logs

This should give you the following structure:

ejabberd-setup/
├── conf
│   └── ejabberd.yml
├── database
├── docker-compose.yml
└── logs

To verify the structure, use the tree command. It is a very useful tool which we use on a daily basis.

Set permissions

Since we'll be using bind mounts in this example, it's important to ensure that specific directories (like database and logs) have the correct permissions for the ejabberd user inside the container (UID 9000, GID 9000).

Customize or skip depending on your needs:

sudo chown -R 9000:9000 database
sudo chown -R 9000:9000 logs

Based on this Issue.

2.2: The docker-compose.yml file

Now, create a docker-compose.yml file inside, containing:

services:
  ejabberd:
    image: ejabberd/ecs:latest
    container_name: ejabberd
    ports:
      - "5222:5222"  # XMPP Client
      - "5280:5280"  # Web Admin Interface, optional
    volumes:
      - ./database:/home/ejabberd/database
      - ./conf/ejabberd.yml:/home/ejabberd/conf/ejabberd.yml
      - ./logs:/home/ejabberd/logs
    restart: unless-stopped

2.3: The ejabberd.yml file

A basic configuration file for ejabberd will be required; we will name it conf/ejabberd.yml.

loglevel: 4
hosts:
- "localhost"

acl:
  admin:
    user:
      - "admin@localhost"

access_rules:
  local:
    allow: all

listen:
  -
    port: 5222
    module: ejabberd_c2s

  -
    port: 5280                       # optional
    module: ejabberd_http            # optional
    request_handlers:                # optional
      "/admin": ejabberd_web_admin   # optional

Did you know? Since 23.10, ejabberd offers users the option to create or update the relevant MySQL, PostgreSQL or SQLite tables automatically with each update. You can read more about it here.

3: Starting ejabberd

Finally, we're all set: you can run the following command to start your stack: docker-compose up -d

Your ejabberd instance should now be running in a Docker container! Good job! 🎉

From there, customize ejabberd to your liking! Naturally, in this example we're going to keep ejabberd in its barebones configuration, but we recommend configuring it at this stage to suit your needs (domains, SSL, favourite modules, chosen database, admin accounts, etc.)

Example: You could register your admin account at this stage

To use the admin interface, you need to create an admin account. You can do so by running the following command:

$ docker exec -it ejabberd bin/ejabberdctl register admin localhost very_secret_password
> User admin@localhost successfully registered

Once this step is complete, you will then be able to access the web admin interface at http://localhost:5280/admin.

4. Set up automatic updates

Finally, we come to the most interesting part: how do I keep my containers up to date?

To keep your ejabberd instance up-to-date, you can use Watchtower, a Docker container that automatically updates other containers when new versions are available.

Warning: Auto-updates are undoubtedly convenient, but they can occasionally cause issues if an update includes breaking changes. Always test updates in a staging environment and back up your data before enabling auto-updates. Further information can be found at the end of this post.

If greater control over updates is required (for example, for mission-critical production servers or clusters), we recommend using Diun, which can notify you of available updates and allow you to decide when to apply them.

4.1: Add Watchtower to your docker-compose.yml

To include Watchtower, add it as a service in docker-compose.yml:

services:
  ejabberd:
    image: ejabberd/ecs:latest
    container_name: ejabberd
    ports:
      - "5222:5222"  # XMPP Client
      - "5280:5280"  # Web Admin Interface, optional
    volumes:
      - ./database:/home/ejabberd/database
      - ./conf/ejabberd.yml:/home/ejabberd/conf/ejabberd.yml
      - ./logs:/home/ejabberd/logs
    restart: unless-stopped

  watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_POLL_INTERVAL=3600 # Sets how often Watchtower checks for updates (in seconds).
      - WATCHTOWER_CLEANUP=true # Ensures old images are cleaned up after updating.
    restart: unless-stopped

Watchtower offers a wide range of additional features, including the ability to set up notifications, exclude specific containers, and more. For further information, please refer to the Watchtower Docs.

Once the docker-compose.yml has been updated, please bring it up using the following command: docker-compose up -d

And... there you go, you're all set!

5. Best Practices & closing words

Watchtower will now perform periodic checks for updates to your ejabberd container and apply them automatically.

To be fair, by default Watchtower will also update any other containers running on the same server. This behaviour can be controlled with environment variables (see Container Selection), which help exclude containers from updates.
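For example, Watchtower's label-based selection restricts updates to containers that explicitly opt in. A sketch (see the Watchtower documentation for the authoritative option and label names):

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_LABEL_ENABLE=true  # only update containers that opt in via a label

  ejabberd:
    image: ejabberd/ecs:latest
    labels:
      - "com.centurylinklabs.watchtower.enable=true"  # opt this container in
```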


One important thing to understand is that Watchtower will only update containers tagged with the :latest tag.

In an environment with numerous Docker containers, using the latest tag streamlines automatic updates. However, it may introduce unanticipated, potentially breaking changes with each new release. Ideally, we recommend pinning a specific version like ejabberd/ecs:24.10 and deciding how and when to update it manually (especially if you're into infra-as-code).

However, we recognise that some users may prefer the convenience of automatic updates; personally, that's what I do on my homelab, but I'm not afraid to dig in if something breaks.


tl;dr: For a small community server, homelab or personal instance, Watchtower will help keep things up to date with minimal effort. For bigger production environments, however, it is advisable to pin specific versions and update them manually, to ensure greater control and resilience.

With this setup, you now have a fully functioning XMPP server using ejabberd, with automatic updates. You can now start building your chat applications or integrate it with your existing services! 🚀

by Adrien at November 12, 2024 14:15

November 05, 2024

ProcessOne

Thoughts on Improving Messaging Protocols — Part 2, Matrix

In the first part of this blog post, I explained how the Matrix protocol works, contrasted its design philosophy with XMPP, and discussed why these differences lead to performance costs in Matrix. Matrix processes each conversation as a graph of events, merged in real-time[1].

Merge operations can be costly in Matrix for large rooms, affecting database storage and load, as well as disk usage once memory is exhausted and the system starts swapping.

That said, there is still room for improvement in the protocol. We have designed and tested slight changes that could make Matrix much more efficient for large rooms.

A Proposal to Simplify and Speed Up Merge Operations

Here is the rationale behind a proposal we have made to simplify and speed up merge operations:

State resolution v2 uses certain graph algorithms, which can result in at least linear processing time for the number of state events in a room’s DAG, creating a significant load on servers.

The goal of this issue is to discuss and develop changes to state resolution to achieve O(n log n) total processing time when handling a room with n state events (i.e., O(log n) per event on average) in realistic scenarios, while maintaining a good user experience.

The approach described below is closer to state resolution v1 but seeks to address state resets in a different way.

For more detail, you can read our proposal on the Matrix spec tracker: Make state resolution faster.

In simpler terms, we propose adding a version associated with each event_id to simplify conflict management and introduce a heuristic that skips traversal of large parts of the graph.

Impact of the Proposal

From our initial assessment, in a very large room — such as one with 100,000 members — our approach could improve processing performance by 100x to 1000x, as the current processing cost scales with the number of users in the room. This improvement would enable smoother conversations, reduced lag, and more responsive interactions for end-users, while also reducing server infrastructure load and resource usage.

While our primary goal is to improve performance in very large rooms, these changes benefit all users by reducing overall server load and improving processing times across various room sizes.

We plan to implement this improvement in our own code to evaluate its real-world effectiveness while the Matrix team considers its potential value for the reference protocol.


  1. For those who remember, a conversation in Matrix is similar to the collaborative editing protocol built on top of XMPP for the Google Wave platform.

by Mickaël Rémond at November 05, 2024 13:53

The XMPP Standards Foundation

XMPP Summit 27

The XMPP Standards Foundation (XSF) will hold its 27th XMPP Summit in Brussels, Belgium again next year, on the two days preceding FOSDEM 2025. The XSF invites everyone interested in the development of the XMPP protocol to attend and discuss all things XMPP, in person or remotely!

The Summit

The XMPP Summit is a two-day event for the people who write and implement XMPP extensions (XEPs). The event is not a conference: besides small and short lightning talks, there are no long presentations. The participants (everyone is welcome) sit at a round table to discuss, and active participation is encouraged. Similar to an unconference, at the beginning all participants can suggest topics, and others indicate via votes whether or not they are interested in each topic. Afterwards, a rough order of topics is established, which is then followed with moderation by the participants.

If you have ever followed a thread on the standards mailing list or participated in a discussion on the public XSF channel you should be familiar with this, now only in person. The different topics are broken up by short breaks that are great for networking and getting to know other XMPP developers. Still, if you cannot participate, we will also provide an online way of joining the discussion.

Agreeing on a common strategy, or even establishing a rough priority for certain features in our decentralised and interoperable technology and protocol, can be hard. In-person events do a lot to get us on the same page, and if you are an XMPP developer (e.g. of a client, server or gateway) we strongly encourage you to come to the summit. (Note: to get the most out of the summit, you should have a background in reading, and maybe even writing, XEPs.) If you are simply an enthusiastic user or admin, we regularly have booths at various conferences (FOSDEM, CLT, FrOSCon, …) that are a great opportunity to meet us, too.

If we have gained your attention, we hope to see you at XMPP Summit 27. Read on!

Time & Address

The summit will take place at the Thon Hotel EU, with coffee breaks (from 09:00) and lunch (13:00 to 14:00) paid for by the XSF in the hotel restaurant.

Date: Thursday 30th - Friday 31st January 2025
Time: 09:00 - 17:00 (CET) on both days

Thon Hotel EU
Room: Germany
Wetstraat / Rue de la Loi 75
1040 Brussels
Openstreetmap

Furthermore, the XSF will have its Summit Dinner on Thursday night, which is paid for by the XSF for all participating XSF members. Everyone else is of course invited to participate, although at their own expense. Please reach out if you are participating as a non-member (see the list below).

Participation

So that we can make final arrangements with the hotel, you must register before Wednesday 15th January 2025!

Please note that, although we welcome everyone to join, you must announce your attendance beforehand, as the venue is not publicly accessible. If you're interested in attending, please make yourself known by filling out your details on the wiki page for Summit 27. To edit the page, either reach out to an XSF member to enter and update your details, or request a wiki account, which we'll happily provide for you; reach out via the communication channels listed below. When you sign up, please also book your accommodation and travel. And please remove your name if you can no longer attend.

Please also consider signing up if you plan to:

Communication

To ensure you receive all the relevant information, updates and announcements about the event, make sure that you’re signed up to the Summit mailing list and the Summit chatroom (Webview).

Spread the word also via our communication channels such as Mastodon and Twitter.

Sponsors

We would like to thank Isode for sponsoring the XMPP Summit again.

We also would like to thank Alexander Gnauck for sponsoring the XSF Dinner again.

Also many thanks to Daniel Gultsch investing time and resources to help organising the event again!

We appreciate support via sponsoring or even XSF sponsorship so that we can keep the event open and accessible for everyone. If you are interested, please contact the XSF Board.

We are really excited seeing many people already signing up. Looking forward to meeting all of you!

The XMPP Standards Foundation

November 05, 2024 00:00

The XMPP Newsletter October 2024

XMPP Newsletter Banner

Welcome to the XMPP Newsletter, great to have you here again! This issue covers the month of October 2024.

XSF Announcements

XSF Membership

If you are interested in joining the XMPP Standards Foundation as a member, please apply by November 24th, 2024!

XMPP Summit 27 & FOSDEM 2025

The XSF is planning the XMPP Summit 27, which is to take place on January 30th & 31st 2025 in Brussels (Belgium, Europe). Following the Summit, the XSF is also planning to be present at FOSDEM 2025, which takes place on February 1st & 2nd 2025. Find all the details in our Wiki. Please sign up now if you are planning to attend, since this helps with the organisation. The event is of course open for everyone interested to participate. Spread the word within your circles!

XSF Fiscal Hosting Projects

The XSF offers fiscal hosting for XMPP projects. Please apply via Open Collective. For more information, see the announcement blog post. Current projects you can support:

XMPP Events

  • Berlin XMPP Meetup [DE / EN]: monthly meeting of XMPP enthusiasts in Berlin, every 2nd Wednesday of the month at 6pm local time
  • XMPP Italian happy hour [IT]: monthly Italian XMPP web meeting, every third Monday of the month at 7:00 PM local time (online event, with web meeting mode and live streaming).

XMPP Articles

XMPP Software News

XMPP Clients and Applications

A basic XMPP messaging client for KaiOS

  • Mellium co-op has released Communique, version 0.0.1 of its instant messaging client with a terminal-based user interface. This initial release features 1:1 and multi-user chat support, HTTP file upload, ad-hoc commands, and chat history.
Communique: Initial release with features including 1:1 and multi-user chat support

XMPP Servers

  • ejabberd 24.10: The “Bidi” Stream Release has been released. This is a major release packed with substantial improvements and support for important extensions specified by the XMPP Standard Foundation (XSF). The improvements span enhanced security and streamlined connectivity—all designed to make ejabberd more powerful and easier to use than ever.

XMPP Libraries & Tools

  • Ignite Realtime community:
    • Smack 4.5.0-beta5 released! The Ignite Realtime developer community is happy to announce that Smack 4.5 has entered its beta phase. Smack is an XMPP client API written in Java that is able to run on Java SE and Android. The Smack 4.5 API is considered stable; however, small adjustments are still possible during the beta phase.
  • go-xmpp versions 0.2.2, 0.2.3 and 0.2.4 have been released.
  • go-sendxmpp versions 0.11.3 and 0.11.4 have been released.
  • Slidge v0.2.0 has been released. Slidge is the XMPP (puppeteer) gateway library in Python that makes writing gateways to other chat networks (legacy modules) as frictionless as possible.
  • Join Jabber added two new entries to their growing list of XMPP integration tutorials: Forgejo and Sharkey!
  • QXmpp versions 1.8.2 and 1.8.3 have been released.

Extensions and specifications

The XMPP Standards Foundation develops extensions to XMPP in its XEP series in addition to XMPP RFCs.

Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).

Proposed

The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

New

  • Version 0.1.0 of XEP-0495 (Happy Eyeballs)
    • Promoted to Experimental (XEP Editor: dg)

Deferred

If an experimental XEP is not updated for more than twelve months, it will be moved off Experimental to Deferred. If there is another update, it will put the XEP back onto Experimental.

  • No XEPs deferred this month.

Updated

  • Version 1.6.2 of XEP-0198 (Stream Management)
    • Clarify server enabling stream management without requested resume functionality. (gk)
  • Version 0.3.0 of XEP-0394 (Message Markup)
    • Add support for strong emphasis, declaring language on code blocks and making lists ordered. (lmw)
  • Version 0.1.3 of XEP-0491 (WebXDC)
    • Clarifications and wording
    • Better references for WebXDC spec (spw)

Last Call

Last calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call can help improve the XEP before returning it to the Council for advancement to Stable.

  • XEP-0490: Message Displayed Synchronization

Stable

  • No XEP moved to Stable this month.

Deprecated

  • No XEP deprecated this month.

Rejected

  • No XEP rejected this month.

Spread the news

Please share the news on other networks:

Subscribe to the monthly XMPP newsletter
Subscribe

Also check out our RSS Feed!

Looking for job offers or want to hire a professional consultant for your XMPP project? Visit our XMPP job board.

Newsletter Contributors & Translations

This is a community effort, and we would like to thank translators for their contributions. Volunteers and more languages are welcome! Translations of the XMPP Newsletter will be released here (with some delay):

  • English (original): xmpp.org
    • General contributors: Adrien Bourmault (neox), Alexander “PapaTutuWawa”, Arne, cal0pteryx, emus, Federico, Gonzalo Raúl Nemmi, Jonas Stein, Kris “poVoq”, Licaon_Kter, Ludovic Bocquet, Mario Sabatino, melvo, MSavoritias (fae,ve), nicola, Schimon Zachary, Simone Canaletti, singpolyma, XSF iTeam
  • French: jabberfr.org and linuxfr.org
    • Translators: Adrien Bourmault (neox), alkino, anubis, Arkem, Benoît Sibaud, mathieui, nyco, Pierre Jarillon, Ppjet6, Ysabeau
  • Italian: notes.nicfab.eu
    • Translators: nicola
  • Spanish: xmpp.org
    • Translators: Gonzalo Raúl Nemmi
  • German: xmpp.org
    • Translators: Millesimus

Help us to build the newsletter

This XMPP Newsletter is produced collaboratively by the XMPP community. Each month’s newsletter issue is drafted in this simple pad. At the end of each month, the pad’s content is merged into the XSF Github repository. We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. You have a project and want to spread the news? Please consider sharing your news or events here, and promote it to a large audience.

Tasks we do on a regular basis:

  • gathering news in the XMPP universe
  • short summaries of news and events
  • summary of the monthly communication on extensions (XEPs)
  • review of the newsletter draft
  • preparation of media images
  • translations
  • communication via media accounts

Unsubscribe from the XMPP Newsletter

To unsubscribe from this list, please log in first. If you have not previously logged in, you may need to set up an account with the appropriate email address.

License

This newsletter is published under CC BY-SA license.

November 05, 2024 00:00

November 04, 2024

Prosodical Thoughts

New server, new sponsor

It shouldn’t surprise you, but here we have an obsession with self-hosting. We fought off many requests to migrate our hosting to GitHub (even before it was cool to hate GitHub - Prosody and GitHub were both founded in the same year!).

As a result, we self-host our XMPP service (of course), our website, our code repos, our issue tracker, package repository and our CI and build system.

This is not always easy - our project has always been a rather informal collaboration of individuals, meaning it’s not a commercial venture and we don’t have any employees. For better or worse, we’re firmly rooted in the free and open-source software principles that focus on growing communities rather than profits.

As a result, we love working with people who have similar roots and values.

For many years we had a happy home for our servers with Bytemark, who were very supportive of open-source projects, including ours (they used Prosody themselves for communication, and some of their employees contributed to the project). We are grateful to them for sponsoring the hosting of our build server for many years. However, all good things come to an end - and when Bytemark was acquired in recent years by the much larger iomart Group PLC enterprise as part of a string of other acquisitions, we knew our good times with them were likely drawing to a close.

This was recently confirmed, as we and the other remaining Bytemark customers were notified that all services are being moved to another location and another of iomart’s brands. We also received an email to inform us that our sponsorship would no longer be in effect after this transition. The monthly price we were told we would have to pay for the server was many multiples of what an equivalent server would cost by today’s standards, even if we had income to pay for it.

So, we bid a final farewell to Bytemark! But as one chapter ends, another can begin.

At the time of the acquisition, many ex-Bytemark customers recommended various alternatives. However among those, one independent provider, Mythic Beasts, really stood out. You may have stumbled across them already, for their innovative Raspberry Pi hosting and handling Raspberry Pi launch announcements on a stack of Raspberry Pi devices, or you may have come across them on the Fediverse via their (self-hosted, of course) @beasts Mastodon account. As well as Raspberry Pi hosting, of course they also offer conventional (dedicated and virtual) servers, DNS, traditional web space, and more.

Mythic Beasts logo

Mythic Beasts turned out to be just what we were looking for - a no-nonsense service-driven provider where you’ll find founders answering support tickets and where providing amazing service and having fun while doing so are deemed more important than maximizing growth and shareholder value.

Running services with a hosting provider is a kind of partnership that requires placing a certain amount of trust. Trust that they are competent, that it’s easy to contact someone if things go wrong, and that their values are aligned with yours for the long term. It’s hard to find providers that tick all these boxes.

Having used Mythic Beasts for a few things personally in recent years, I felt increasingly confident they would be a good home for Prosody’s infrastructure too. In fact they’ve been very supportive and understanding from the moment I reached out about Prosody’s situation, and have generously provided us with capacity to migrate all our services across and retire our old servers. You may have noticed a few blips in recent weeks as we did just that. Thanks for bearing with us!

All our services are now running smoothly on VMs provided by Mythic Beasts, and we can’t thank them enough as they enable us to continue our journey. It feels great to be with a provider that not only knows but cares about things like open-source, environmental impact, as well as IPv6, DNSSEC and all the other internet tech we care about too.

For those of you curious, here’s a list (probably not exhaustive) of things we are currently running as part of the project’s infrastructure:

If you notice any post-migration issues with our site or services, drop by the chat and let us know! Also, if you’re in need of hosting, now you know where we would suggest looking first :)

by The Prosody Team at November 04, 2024 10:00

November 01, 2024

Ignite Realtime Blog

Openfire 4.9.1 release

The Ignite Realtime community is happy to be able to announce the immediate availability of version 4.9.1 of Openfire, its cross-platform real-time collaboration server based on the XMPP protocol!

4.9.1 is a bugfix and maintenance release. Among its most important fixes is one for a memory leak that affected all recent versions of Openfire (but was likely noticeable only on those servers that see high volume of users logging in and out). The complete list of changes that have gone into this release can be seen in the change log.

Please give this version a try! You can download installers of Openfire here. Our documentation contains an upgrade guide that helps you update from an older version.

The integrity of these artifacts can be checked with the following sha256sum values:

8c489503f24e35003e2930873037950a4a08bc276be1338b6a0928db0f0eb37d  openfire-4.9.1-1.noarch.rpm
1e80a119c4e1d0b57d79aa83cbdbccf138a1dc8a4086ac10ae851dec4f78742d  openfire_4.9.1_all.deb
69a946dacd5e4f515aa4d935c05978b5a60279119379bcfe0df477023e7a6f05  openfire_4_9_1.dmg
c4d7b15ab6814086ce5e8a1d6b243a442b8743a21282a1a4c5b7d615f9e52638  openfire_4_9_1.exe
d9f0dd50600ee726802bba8bc8415bf9f0f427be54933e6c987cef7cca012bb4  openfire_4_9_1.tar.gz
de45aaf1ad01235f2b812db5127af7d3dc4bc63984a9e4852f1f3d5332df7659  openfire_4_9_1_x64.exe
89b61cbdab265981fad4ab4562066222a2c3a9a68f83b6597ab2cb5609b2b1d7  openfire_4_9_1.zip

We would love to hear from you! If you have any questions, please stop by our community forum or our live groupchat. We are always looking for volunteers interested in helping out with Openfire development!

For other release announcements and news follow us on Mastodon or X

6 posts - 4 participants

Read full topic

by guus at November 01, 2024 19:54

October 31, 2024

Erlang Solutions

Why you should consider machine learning for business

Adopting machine learning for business is necessary for companies that want to sharpen their competitive edge. With the global market for machine learning projected to reach an impressive $210 billion by 2030, businesses are actively seeking solutions that streamline processes and improve customer interactions.

While organisations may already employ some form of data analysis, traditional methods often lack the sophistication to address the complexities of today’s market. Businesses that adopt machine learning can unlock valuable data insights, make accurate predictions and deliver personalised experiences that truly resonate with customers, ultimately driving growth and efficiency.

What is Machine Learning?

Machine learning (ML) is a subset of artificial intelligence (AI). It uses algorithms designed to learn from data, identify patterns, and make predictions or decisions without explicit programming. By analysing patterns in the data, a machine learning algorithm identifies key features that define a particular data point, allowing it to apply this knowledge to new, unseen information.

Fundamentally data-driven, machine learning relies on vast information to learn, adapt, and improve over time. Its predictive capabilities allow models to forecast future outcomes based on the patterns they uncover. These models are generalisable, so they can apply insights from existing data to make decisions or predictions in unfamiliar situations.

You can read more about machine learning and AI in our previous post.

Approaches to Machine Learning

Machine learning for business typically involves two key approaches: supervised and unsupervised learning, each suited to different types of problems. Below, we explain each approach and provide examples of machine learning use cases where these techniques are applied effectively.

  • Supervised Machine Learning: This approach demands labelled data, where the input is matched with the correct output. The algorithms learn to map inputs to outputs based on this training set, honing their accuracy over time.
  • Unsupervised Machine Learning: In contrast, unsupervised learning tackles unlabelled data, compelling the algorithm to uncover patterns and structures independently. This method can involve tasks like clustering and dimensionality reduction. While unsupervised techniques are powerful, interpreting their results can be tricky, leading to challenges in assessing whether the model is truly on the right track.
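The contrast between the two approaches can be illustrated with a toy example. The sketch below is not a production model: it fits a labelled "churn" dataset with a trivial supervised learner, then clusters the same spend values without labels using a crude 1-D k-means. All data and function names are invented stand-ins for real ML libraries.

```python
def fit_threshold(samples):
    """Supervised: learn a single spend threshold separating churners.

    samples: list of (monthly_spend, churned) pairs with boolean labels.
    Returns the candidate threshold minimising misclassifications.
    """
    candidates = sorted(spend for spend, _ in samples)
    return min(
        candidates,
        key=lambda t: sum((spend < t) != churned for spend, churned in samples),
    )

def kmeans_1d(values, k=2, iters=20):
    """Unsupervised: crude 1-D k-means; no labels are required."""
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # Recompute each centroid as its cluster mean (keep old if empty)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

labelled = [(10, True), (12, True), (15, True), (80, False), (95, False)]
print(fit_threshold(labelled))          # learned spend cut-off
print(kmeans_1d([10, 12, 15, 80, 95]))  # two discovered group centres
```

The supervised learner needs the churn labels to pick its threshold; the clustering step finds the same low-spend/high-spend split purely from the structure of the data.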

Example of Supervised vs unsupervised learning

Supervised learning uses historical data to make predictions, helping businesses optimise performance based on past outcomes. For example, a retailer might use supervised learning to predict customer churn. By feeding the algorithm data such as customer purchase history and engagement metrics, it learns to identify patterns that indicate a high risk of churn, allowing the business to implement proactive retention strategies.

Unsupervised learning, on the other hand, uncovers hidden patterns within data. It is particularly useful for discovering new customer segments without prior labels. For instance, an e-commerce platform might use unsupervised learning to group customers by their browsing habits, discovering niche audiences that were previously overlooked.

The Impact of Machine Learning on Business

A recent survey by McKinsey revealed that 56% of organisations surveyed are using machine learning in at least one business function to optimise their operations. This growing trend shows how machine learning for business is becoming integral to staying competitive.

The AI market as a whole is also on an impressive growth trajectory, projected to reach USD 407.0 billion by 2027.


AI Global Market Forecast to 2030

The market is expected to grow at an astounding compound annual growth rate (CAGR) of 35.7% through 2030, proving that business analytics is no longer just a trend; it is becoming a core component of modern enterprises.

Machine Learning for Business Use Cases

Machine learning can be used in numerous ways across industries to enhance workflows. From image recognition to fraud detection, businesses are actively using AI to streamline operations.

Image Recognition

Image recognition, or image classification, is a powerful machine learning technique used to identify and classify objects or features in digital images.

Artificial intelligence (AI) and machine learning (ML) are revolutionising image recognition systems by uncovering hidden patterns in images that may not be visible to the human eye. This technology allows these systems to make independent and informed decisions, significantly reducing the reliance on human input and feedback. 

As a result, visual data streams can be processed automatically at an ever-increasing scale, streamlining operations and enhancing efficiency. By harnessing the power of AI, businesses can leverage these insights to improve their decision-making processes and gain a competitive edge in their respective markets.

Image recognition plays a crucial role in tasks like pattern recognition, face detection, and facial recognition, making it indispensable in the security and social media sectors.

Fraud Detection

With financial institutions handling millions of transactions daily, distinguishing between legitimate and fraudulent activity can be a challenge. As online banking and cashless payments grow, so too has the volume of fraud. A 2023 report from TransUnion revealed a 122% increase in digital fraud attempts in the US between 2019 and 2022. 

Machine learning helps businesses by flagging suspicious transactions in real-time, with companies like Mastercard using AI to predict and prevent fraud before it occurs, protecting consumers from potential theft.
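The core idea of real-time flagging can be sketched in a few lines. The example below is a deliberately simplified stand-in for the ML scoring a payment provider would actually run: it keeps a rolling baseline of recent amounts per account and flags any transaction whose z-score exceeds a threshold. All class names, thresholds, and amounts are invented for illustration.

```python
from collections import deque
from statistics import mean, stdev

class FraudFlagger:
    def __init__(self, window=50, threshold=3.0):
        self.history = deque(maxlen=window)  # recent amounts for one account
        self.threshold = threshold           # z-score above which we flag

    def observe(self, amount):
        """Return True if the transaction looks anomalous, then record it."""
        flagged = False
        if len(self.history) >= 5:  # need a minimal baseline first
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and (amount - mu) / sigma > self.threshold:
                flagged = True
        self.history.append(amount)
        return flagged

flagger = FraudFlagger()
for amt in [20, 25, 22, 18, 24, 21, 23]:  # normal card activity
    flagger.observe(amt)
print(flagger.observe(950))                # unusually large charge is flagged
```

A real system would replace the z-score with a trained model over many features (location, device, behavioural history), but the pattern of scoring each transaction against learned expectations is the same.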

Speech Recognition

Voice commands have become a common feature in smart devices, from setting timers to searching for shows. 

Thanks to machine learning, devices like Google Nest speakers and Amazon Blink security systems can recognise and act on voice inputs, making hands-free operation more convenient for users in everyday situations.

Improved Healthcare

Machine learning in healthcare has led to major improvements in patient care and medical discoveries. By analysing vast amounts of healthcare data, machine learning enhances the accuracy of diagnoses, optimises treatments, and accelerates research outcomes.

For instance, AI systems are already employed in radiology to detect diseases in medical images, such as identifying cancerous growths. Additionally, machine learning is playing a crucial role in genomic research by uncovering patterns linked to genetic disorders and potential therapies. These advancements are paving the way for improved diagnostics and faster medical research, offering tremendous potential for the future of healthcare.

Key applications of machine learning in healthcare include:

  • Developing predictive modelling
  • Improving diagnostic accuracy
  • Personalising patient care
  • Automating clinical workflows
  • Enhancing patient interaction

Machine learning in healthcare utilises algorithms and statistical models to analyse large medical datasets, facilitating better decision-making and personalised care. As a subset of AI, machine learning identifies patterns, makes predictions, and continuously improves by learning from data. Different types of learning, including supervised and unsupervised learning, find applications in disease classification and personalised treatment recommendations.

Chatbots

Many businesses rely on customer support to maintain satisfaction. However, staffing trained specialists can be expensive and inefficient. AI-powered chatbots, equipped with natural language processing (NLP), assist by handling basic customer queries. This frees up human agents to focus on more complicated issues. Companies can provide more efficient and effective support without overburdening their teams.

Each of these applications offers businesses the chance to streamline operations and improve customer experiences. 

Machine Learning Case Studies

Machine learning for business is transforming industries by enabling companies to enhance their operations, improve customer experiences, and drive innovation. 

Here are a few machine learning case studies showing how leading organisations have integrated machine learning into their business strategies.

PayPal

PayPal, a worldwide payment platform, faced huge challenges in identifying and preventing fraudulent transactions. 

Machine learning for business PayPal case study


To tackle this issue, the company implemented machine learning algorithms designed for fraud detection. These algorithms analyse various aspects of each transaction, including the transaction location, the device used, and the user’s historical behaviour. This approach has significantly enhanced PayPal’s ability to protect users and maintain the integrity of its payment platform.

YouTube

YouTube has long employed machine learning to optimise its operations, particularly through its recommendation algorithms. By analysing vast amounts of historical data, YouTube suggests videos to its viewers based on their preferences. Currently, the platform processes over 80 billion data points for each user, requiring large-scale neural networks that have been in use since 2008 to effectively manage this immense dataset.

Machine learning for business YouTube case study

Dell

Recognising the importance of data in marketing, Dell’s marketing team sought a data-driven solution to enhance response rates and understand the effectiveness of various words and phrases. Dell partnered with Persado, a firm that leverages AI to create compelling marketing content. This collaboration led to an overhaul of Dell’s email marketing strategy, resulting in a 22% average increase in page visits and a 50% boost in click-through rates (CTR). Dell now utilises machine learning methods to refine its marketing strategies across emails, banners, direct mail, Facebook ads, and radio content.

Machine learning for business case study Dell

Tesla

Tesla employs machine learning to enhance the performance and features of its electric vehicles. A key application is its Autopilot system, which combines cameras, sensors, and machine learning algorithms to provide advanced driver assistance features such as lane centring, adaptive cruise control, and automatic emergency braking.

Machine learning for business case study Tesla

The Autopilot system uses deep neural networks to process vast amounts of real-world driving data, enabling it to predict driving behaviour and identify potential hazards. Additionally, Tesla leverages machine learning in its battery management systems to optimise battery performance and longevity by predicting behaviour under various conditions.

Netflix

Netflix is a leader in personalised content recommendations. It uses machine learning to analyse user viewing habits and suggest shows and movies tailored to individual preferences. This feature has proven essential for improving customer satisfaction and increasing subscription renewals. To develop this system, Netflix utilises viewing data, including viewing durations, metadata, release dates, and timestamps. Netflix then employs collaborative filtering, matrix factorisation, and deep learning techniques to accurately predict user preferences.

Machine learning for business case study Netflix
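Collaborative filtering, one of the techniques mentioned above, can be demonstrated with a toy user-based recommender: score unseen titles by the ratings of similar users, weighted by cosine similarity. This is a simplified sketch, not Netflix’s actual system; the users, titles, and ratings are invented.

```python
from math import sqrt

ratings = {  # user -> {title: rating}
    "ana":   {"Dark": 5, "Lupin": 4, "Ozark": 1},
    "ben":   {"Dark": 4, "Lupin": 5, "Narcos": 2},
    "carol": {"Ozark": 5, "Narcos": 4, "Lupin": 1},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[t] * v[t] for t in shared)
    return dot / (sqrt(sum(x * x for x in u.values()))
                  * sqrt(sum(x * x for x in v.values())))

def recommend(user):
    """Suggest the unseen title with the highest similarity-weighted score."""
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], theirs)
        for title, r in theirs.items():
            if title not in ratings[user]:
                scores[title] = scores.get(title, 0.0) + sim * r
    return max(scores, key=scores.get) if scores else None

print(recommend("ana"))  # ben is ana's closest match, so "Narcos"
```

Matrix factorisation and deep models generalise this idea, learning latent taste dimensions instead of comparing raw rating vectors directly.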

Benefits of Machine Learning in Business

If you’re still contemplating the value of machine learning for your business, consider the following key benefits:

  • Automation Across Business Processes: Machine learning automates key business functions, from marketing to manufacturing, boosting yield by up to 30%, reducing scrap, and cutting testing costs. This frees employees for more creative, strategic tasks.
  • Efficient Predictive Maintenance: ML helps manufacturers predict equipment failures, reducing downtime and extending machinery lifespan, ensuring operational continuity.
  • Enhanced Customer Experience and Accurate Sales Forecasts: Retailers use machine learning to analyse consumer behaviour, accurately forecast demand, and personalise offers, greatly improving customer experience.
  • Data-Driven Decision-Making: ML algorithms quickly extract insights from data, enabling faster, more informed decision-making and helping businesses develop effective strategies.
  • Error Reduction: By automating tasks, machine learning reduces human error, freeing employees to focus on complex tasks and significantly minimising mistakes.
  • Increased Operational Efficiency: Automation and error reduction from ML lead to efficiency gains. AI systems like chatbots boost productivity by up to 54%, operating 24/7 without fatigue.
  • Enhanced Decision-Making: ML processes large data sets swiftly, turning information into objective, data-driven decisions, removing human bias and improving trend analysis.
  • Addressing Complex Business Issues: Machine learning tackles complex challenges by streamlining operations and boosting performance, enhancing productivity and scalability.


As organisations increasingly adopt machine learning, they position themselves not only to meet current demands but also to innovate in the future.

Elixir and Erlang in Machine Learning

As organisations explore machine learning tools, many are turning to Erlang and Elixir programming languages to develop customised solutions that cater to their needs. Erlang’s fault tolerance and scalability make it ideal for AI applications, as described in our blog on adopting AI and machine learning for business. Additionally, Elixir’s concurrency features and simplicity enable businesses to build high-performance AI applications. 

Learn more about how to build a machine-learning project in Elixir here.


Elixir, built on the Erlang virtual machine (BEAM), delivers top concurrency and low latency. Designed for real-time, distributed systems, Erlang prioritises fault tolerance and scalability, and Elixir builds on this foundation with a high-level, functional programming approach. By using pure functions and immutable data, Elixir reduces complexity and minimises unexpected behaviours in code. It excels at handling multiple tasks simultaneously, making it ideal for AI applications that need to process large amounts of data without compromising performance. 

Elixir’s simplicity in problem-solving also aligns perfectly with AI development, where reliable and straightforward algorithms are essential for machine learning. Furthermore, its distribution features make deploying AI applications across multiple machines easier, meeting the high computational demands of AI systems.

With a rich ecosystem of libraries and tools, Elixir streamlines development, so AI applications are scalable, efficient, and reliable. As AI and machine learning become increasingly vital to business success, creating high-performing solutions will become a key competitive advantage.

Final Thoughts

Embracing machine learning for business is no longer optional for companies that want to remain competitive. Machine learning tools empower businesses to make faster, data-driven decisions, streamline operations, and offer personalised customer experiences. Contact the Erlang Solutions team today if you’d like to discuss building AI systems using Elixir and Erlang, or for more insights into implementing machine learning solutions.

The post Why you should consider machine learning for business appeared first on Erlang Solutions.

by Erlang Solutions Team at October 31, 2024 10:30

October 29, 2024

ProcessOne

ejabberd 24.10

ejabberd 24.10

We’re excited to announce ejabberd 24.10, a major release packed with substantial improvements and support for important extensions specified by the XMPP Standard Foundation (XSF). This release represents three months of focused development, bringing around 100 commits to the core repository alongside key updates in dependencies. The improvements span enhanced security and streamlined connectivity—all designed to make ejabberd more powerful and easier to use than ever.

ejabberd 24.10

Release Highlights:

If you are upgrading from a previous version, please note minor changes in commands and two changes in hooks. There are no configuration or SQL schema changes in this release.

Below is a detailed breakdown of the new features, fixes, and enhancements:

Support for XEP-0288: Bidirectional Server-to-Server Connections

The new mod_s2s_bidi module introduces support for XEP-0288: Bidirectional Server-to-Server Connections. This update removes the requirement for two connections per server pair in XMPP federations, allowing for more streamlined inter-server communications. However, for full compatibility, ejabberd can still connect to servers that do not support bidirectional connections, using two connections when necessary. The module is enabled by default in the sample configuration.
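Since the module ships enabled in the sample configuration, turning it on in an existing deployment is a one-line addition. A minimal ejabberd.yml fragment, assuming default options, might look like:

```yaml
modules:
  mod_s2s_bidi: {}   # bidirectional s2s is then negotiated per connection
```

Remote servers that do not advertise XEP-0288 support still get the classic two-connection behaviour automatically.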

Support for XEP-0480: SASL Upgrade Tasks

The new mod_scram_upgrade module implements XEP-0480: SASL Upgrade Tasks. Compatible clients can now automatically upgrade encrypted passwords to more secure formats, enhancing security with minimal user intervention.
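To enable it, add the module to the modules section and list which SCRAM upgrades to offer. A sketch of the configuration, assuming the `offered_upgrades` option and mechanism names match the module’s documentation:

```yaml
modules:
  mod_scram_upgrade:
    offered_upgrades:
      - sha256
      - sha512
```

Compatible clients then upgrade stored credentials transparently on their next login.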

PubSub Service Improvements

We’ve implemented six noteworthy fixes to improve PubSub functionality:

  • PEP notifications are sent only to owners when +notify (3469a51)
  • Non-delivery errors for locally generated notifications are now skipped (d4b3095)
  • Fix default node config parsing (b439929)
  • Fix merging of default node options (ca54f81)
  • Fix choice of node config defaults (a9583b4)
  • Fall back to default plugin options (36187e0)

IQ permission for privileged entities

The mod_privilege module now supports IQ permission based on version 0.4 of XEP-0356: Privileged Entity. See #3889 for details. This feature is especially useful for XMPP gateways using the Slidge library.
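Granting a privileged component IQ permission is done per namespace in the module’s configuration. A hedged sketch, assuming the `iq` option shape from the ejabberd documentation (the pubsub namespace here is only an example):

```yaml
modules:
  mod_privilege:
    iq:
      http://jabber.org/protocol/pubsub:
        both: all
```

With `both`, the component may send IQ stanzas of that namespace on behalf of users in either direction, as described in XEP-0356 version 0.4.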

WebAdmin improvements

The ejabberd 24.06 release laid the foundation for a more streamlined WebAdmin interface, reusing existing API commands instead of page-specific code with possibly different logic. This major change allows developers to add new pages very fast, just by calling existing commands. It also allows administrators to use the same commands as in ejabberdctl or any other command frontend.

As a result, many new pages and content were added. Building on that, the 24.10 update introduces MAM (Message Archive Management) support, allowing administrators to view message counts, remove all MAM messages for a user (or only those exchanged with a specific contact), and browse the MAM archive directly from WebAdmin.

ejabberd 24.10

Additionally, WebAdmin now hides pages related to modules that are disabled, preventing unnecessary options from displaying. This affects mod_last, mod_mam, mod_offline, mod_privacy, mod_private, mod_roster, mod_vcard.

Fixes in commands

  • set_presence: Now returns an error when the session is not found.

  • send_direct_invitation: Improved handling of malformed JIDs.

  • update: Fixed the command output. Previously, ejabberd_update:update/0 returned the return value of release_handler_1:eval_script/1. That function returns the list of updated but unpurged modules, i.e., modules where one or more processes are still running an old version of the code. Since commit 5a34020d23f455f80a144bcb0d8ee94770c0dbb1, the ejabberd update command assumed that value to be the list of updated modules instead. As that seems more useful, ejabberd_update:update/0 was modified accordingly, fixing the update command output.

  • get_mam_count: New command to retrieve the number of archived messages for a specific account.

Changes in hooks

Two key changes in hooks:

  • New check_register_user hook in ejabberd_auth.erl to allow blocking account registration when a tombstone exists.

  • Modified room_destroyed hook in mod_muc_room.erl. Until now the hook passed as arguments: LServer, Room, Host. Now it passes: LServer, Room, Host, Persistent. The new Persistent argument carries the room’s persistent option, required by mod_tombstones because only persistent rooms should generate a tombstone; temporary ones should not. The persistent option must not be completely overwritten, as its real value must still be known even while the room is being destroyed.

Log Erlang/OTP and Elixir versions

During server start, ejabberd now shows in the log not only its own version number, but also the Erlang/OTP and Elixir versions being used. This helps the administrator determine what software versions are in use, which is especially useful when investigating a problem and explaining it to others when asking for help.

The ejabberd.log file now looks like this:

...
2024-10-22 13:47:05.424 [info] Creating Mnesia disc_only table 'oauth_token'
2024-10-22 13:47:05.427 [info] Creating Mnesia disc table 'oauth_client'
2024-10-22 13:47:05.455 [info] Waiting for Mnesia synchronization to complete
2024-10-22 13:47:05.591 [info] ejabberd 24.10 is started in the node :ejabberd@localhost in 1.93s
2024-10-22 13:47:05.606 [info] Elixir 1.16.3 (compiled with Erlang/OTP 26)
2024-10-22 13:47:05.606 [info] Erlang/OTP 26 [erts-14.2.5.4] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:1] [jit:ns]

2024-10-22 13:47:05.608 [info] Start accepting TCP connections at 127.0.0.1:7777 for :mod_proxy65_stream
2024-10-22 13:47:05.608 [info] Start accepting UDP connections at [::]:3478 for :ejabberd_stun
2024-10-22 13:47:05.608 [info] Start accepting TCP connections at [::]:1883 for :mod_mqtt
2024-10-22 13:47:05.608 [info] Start accepting TCP connections at [::]:5280 for :ejabberd_http
...

Brand new ProcessOne and ejabberd web sites

We’re excited to unveil the redesigned ProcessOne website, crafted to better showcase our expertise in large-scale messaging across XMPP, MQTT, Matrix, and more. This update highlights our core mission of delivering scalable, reliable messaging solutions, with a fresh layout and streamlined structure that reflect our cutting-edge work in the field.

You now get a cleaner ejabberd page, offering quick access to important URLs for downloads, blog posts, and documentation.

Behind the scenes, we’ve transitioned from WordPress to Ghost, a move inspired by its efficient, user-friendly authoring tools and long-term maintainability. All previous blog content has been preserved, and with this new setup, we’re poised to deliver more frequent updates on messaging, XMPP, ejabberd, and related topics.

We welcome your feedback—join us on our new site to share your thoughts, or let us know about any issue or broken link!

Acknowledgments

We would like to thank the contributions to the source code, documentation, and translation provided for this release by:

And also to all the people contributing in the ejabberd chatroom, issue tracker...

Improvements in ejabberd Business Edition

Customers of the ejabberd Business Edition, in addition to all those improvements and bugfixes, also get MUC support in mod_unread.

ejabberd keeps a counter of unread messages per conversation using the mod_unread module. This now also works in MUC rooms: each user can retrieve the number of unread messages in each of their rooms.

ChangeLog

This is a more detailed list of changes in this ejabberd release:

Miscellanea

  • ejabberd_c2s: Optionally allow unencrypted SASL2
  • ejabberd_system_monitor: Handle call by gen_event:swap_handler (#4233)
  • ejabberd_http_ws: Remove support for old websocket connection protocol
  • ejabberd_stun: Omit auth_realm log message
  • ext_mod: Handle info message when contrib module transfers table ownership
  • mod_block_strangers: Add feature announcement to disco-info (#4039)
  • mod_mam: Advertise XEP-0424 feature in server disco-info (#3340)
  • mod_muc_admin: Better handling of malformed jids in send_direct_invitation command
  • mod_muc_rtbl: Fix call to gen_server:stop (#4260)
  • mod_privilege: Support "IQ permission" from XEP-0356 0.4.1 (#3889)
  • mod_pubsub: Don't blindly echo PEP notification
  • mod_pubsub: Skip non-delivery errors for local pubsub generated notifications
  • mod_pubsub: Fall back to default plugin options
  • mod_pubsub: Fix choice of node config defaults
  • mod_pubsub: Fix merging of default node options
  • mod_pubsub: Fix default node config parsing
  • mod_register: Support to block IPs in a vhost using append_host_config (#4038)
  • mod_s2s_bidi: Add support for S2S Bidirectional
  • mod_scram_upgrade: Add support for SCRAM upgrade tasks
  • mod_vcard: Return error stanza when storage doesn't support vcard update (#4266)
  • mod_vcard: Return explicit error stanza when user attempts to modify other's vcard
  • Minor improvements to support mod_tombstones (#2456)
  • Update fast_xml to use use_maps and remove obsolete elixir files
  • Update fast_tls and xmpp to improve s2s fallback for invalid direct tls connections
  • make-binaries: Bump dependency versions: Elixir 1.17.2, OpenSSL 3.3.2, ...

Administration

  • ejabberdctl: If ERLANG_NODE lacks host, add hostname (#4288)
  • ejabberd_app: At server start, log Erlang and Elixir versions
  • MySQL: Fix column type of the archive table in the schema update

Commands API

  • get_mam_count: New command to get number of archived messages for an account
  • set_presence: Return error when session not found
  • update: Fix command output
  • Add mam and offline tags to the related purge commands

Code Quality

  • Fix warnings about unused macro definitions reported by Erlang LS
  • Fix Elvis report: Fix dollar space syntax
  • Fix Elvis report: Remove spaces in weird places
  • Fix Elvis report: Don&apost use ignored variables
  • Fix Elvis report: Remove trailing whitespace characters
  • Define the types of options that opt_type.sh cannot derive automatically
  • ejabberd_http_ws: Fix dialyzer warnings
  • mod_matrix_gw: Remove useless option persist
  • mod_privilege: Replace try...catch with a clean alternative

Development Help

  • elvis.config: Fix file syntax, set vim mode, disable many tests
  • erlang_ls.config: Let it find paths, update to Erlang 26, enable crossref
  • hooks_deps: Hide false-positive warnings about gen_mod
  • Makefile: Add support for make elvis when using rebar3
  • .vscode/launch.json: Experimental support for debugging with Neovim
  • CI: Add Elvis tests
  • CI: Add XMPP Interop tests
  • Runtime: Cache hex.pm archive from rebar3 and mix

Documentation

  • Add links in top-level options documentation to their Docs website sections
  • Document which SQL servers can really use update_sql_schema
  • Improve documentation of ldap_servers and ldap_backups options (#3977)
  • mod_register: Document behavior when access is set to none (#4078)

Elixir

  • Handle case when elixir support is enabled but not available
  • Start ExSync manually to ensure it's started if (and only if) Relive
  • mix.exs: Fix mix release error: logger being regular and included application (#4265)
  • mix.exs: Remove from extra_applications the apps already defined in deps (#4265)

WebAdmin

  • Add links in user page to offline and roster pages
  • Add new "MAM Archive" page to webadmin
  • Improve many pages to handle when modules are disabled
  • mod_admin_extra: Move some webadmin pages to their modules

Full Changelog

https://github.com/processone/ejabberd/compare/24.07...24.10

ejabberd 24.10 download & feedback

As usual, the release is tagged in the Git source code repository on GitHub.

The source package and installers are available in ejabberd Downloads page. To check the *.asc signature files, see How to verify ProcessOne downloads integrity.

For convenience, there are alternative download locations like the ejabberd DEB/RPM Packages Repository and the GitHub Release / Tags.

The ecs container image is available in docker.io/ejabberd/ecs and ghcr.io/processone/ecs. The alternative ejabberd container image is available in ghcr.io/processone/ejabberd.

If you believe you've found a bug, please search or file a bug report on GitHub Issues.

by Jérôme Sautret at October 29, 2024 14:26

October 24, 2024

Erlang Solutions

Implementing Phoenix LiveView: From Concept to Production

When I began working with Phoenix LiveView, the project evolved from a simple backend service into a powerful, UI-driven customer service tool. A basic Phoenix app for storing user data quickly became a core part of our client’s workflow.

In this post, I’ll take you through a project that grew beyond its original purpose: from a service for storing and serving user data to a LiveView-powered application that is now a key customer service tool in the client’s organisation.

Why We Chose Phoenix LiveView

Our initial goal was to migrate user data from an external, paid service to a new in-house solution, developed collaboratively by Erlang Solutions (ESL) and the client’s teams.

With millions of users, we needed a simple way to verify migrated data without manually connecting to the container and querying the database every time.

Since the in-house service was a Phoenix application that uses Ecto and Postgres, adding LiveView was the most natural fit.

Implementing Phoenix LiveView: Data Migration and UI Development

After we had established the goal, the next step was to create a database service to store and serve user information to other services, as well as to migrate all existing user data from an external service to the new one.

We chose Phoenix with Ecto and Postgres, as the old database was already connected to a Phoenix application, and the client’s team was well-versed in Elixir and BEAM.

Data Migration Strategy

The ESL and client teams’ strategy began by slowly copying user data from the old service to the new database whenever users logged in. For certain users (e.g., developers), we logged them in and pulled user information only from the new system. We defined a new login session struct (Elixir struct), which we used for pattern matching to determine whether to use the old or new system. The old system was treated as a fallback and the source of truth for user data.
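The routing described above can be sketched roughly as follows. The real code is an Elixir struct dispatched by pattern matching; this Python stand-in (with invented names and data) only illustrates the logic of serving opted-in users from the new store while keeping the legacy store as the fallback source of truth.

```python
from dataclasses import dataclass

@dataclass
class NewSystemSession:   # e.g. developers opted into the new database
    user_id: str

@dataclass
class LegacySession:      # regular users during the migration period
    user_id: str

# Invented stand-ins for the two user stores
OLD_DB = {"alice": {"email": "alice@old.example"}}
NEW_DB = {"dev1": {"email": "dev1@new.example"}}

def fetch_user(session):
    """Dispatch on the session type: new system for opted-in users,
    legacy system (the source of truth) for everyone else."""
    if isinstance(session, NewSystemSession):
        return NEW_DB[session.user_id]
    return OLD_DB[session.user_id]

print(fetch_user(NewSystemSession("dev1")))  # served from the new database
print(fetch_user(LegacySession("alice")))    # falls back to the legacy store
```

In Elixir the same dispatch falls out of multiple function clauses matching on the struct type, so no conditional is needed at all.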

Phoenix LiveView Migration to in-house database

With this strategy, we could develop and test the new database system in parallel with the old one in production, without affecting regular users, ensuring that everything worked as expected.

At the end, we performed a data dump for all users, configuring the service to use the new system as the main source of truth. Since we had tested with a small number of users beforehand, the transition was smooth, and users had no idea anything had changed from their end. Response times were cut in half compared to the previous solution!

The Evolution of LiveView Application

The idea of adding LiveView to the application first came up when the ESL and client teams wanted to inspect the test migration data. The team wanted to cross-reference immediately whether user data had been inserted or updated as intended in the new service. At first this was complicated and cumbersome, as we had to connect to the application remotely and run a manual query or call an internal function from a remote Elixir shell.

Phoenix LiveView: Evolution of LiveView Application

Initially, LiveView was developed solely for the team. We started with a simple table listing users, then added search functionality for IDs or emails, followed by pagination as the test data grew. With this simple LiveView UI in place, we started the data migration process, and the UI helped tremendously when verifying that the data had been migrated correctly and counting how many users we had successfully migrated.
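The kind of admin LiveView described above, with a user table, search, and pagination, can be sketched as below. This is a minimal sketch under assumptions: the module names (AdminWeb.UserLive.Index, Accounts.list_users/1) and the page size are illustrative, not the client's real code.

```elixir
defmodule AdminWeb.UserLive.Index do
  use AdminWeb, :live_view

  @per_page 25

  def mount(_params, _session, socket) do
    {:ok, assign(socket, query: "", page: 1, users: list_users("", 1))}
  end

  # Search by ID or email, resetting to the first page.
  def handle_event("search", %{"q" => q}, socket) do
    {:noreply, assign(socket, query: q, page: 1, users: list_users(q, 1))}
  end

  # Simple "next page" pagination over the current query.
  def handle_event("next-page", _params, socket) do
    page = socket.assigns.page + 1
    users = list_users(socket.assigns.query, page)
    {:noreply, assign(socket, page: page, users: users)}
  end

  # Hypothetical context function doing the actual Ecto query.
  defp list_users(query, page) do
    Accounts.list_users(
      search: query,
      limit: @per_page,
      offset: (page - 1) * @per_page
    )
  end
end
```

Because search and pagination are just handle_event clauses updating assigns, each new request from the team (another filter, another column) tends to be a small, local change.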

Adoption and Expansion of the LiveView Tool

As we demonstrated the UI to stakeholders, it quickly became the go-to tool for customer service, with new features continuously added based on feedback. The development team received many requests from customer service and other managers in the client’s organisation. We fulfilled these requests with features such as searching users by a combination of fields, helping change users’ email addresses, and checking user activity (e.g., when a user’s email was changed or if users suspected they had been hacked).

Later, we connected the LiveView application to sync and display data from another internal service, which contained information about users’ access to the client’s product. The customer service team was able to get a more complete view of the user and could use the same tool to grant or sync user access without switching to other systems.

The best aspect of using Phoenix LiveView is that the development team also owned the UI. We determined the data structure, knew what needed to be there, and designed the LiveView page ourselves. This removed the need to rely on another team, and we could reflect changes swiftly in the web views without having to coordinate with external teams.

Challenges and Feedback of Implementing Phoenix LiveView

There were some glitches along the way, and when we asked for feedback from the customer service team, we found several UX aspects that could be improved. For example, data didn’t always update immediately, or buttons occasionally failed to work properly. However, these issues also indicated that the Phoenix LiveView application was used heavily by the team, emphasising the need for improvements to support better workflows.

While our LiveView implementation worked well, it wasn’t without imperfections. Most of our development team lacked extensive web development experience, so there were several aspects we either overlooked or didn’t fully consider. A few team members with knowledge of web technologies like Tailwind and CSS/HTML helped guide us, but we realised that basic HTML/CSS skills alone wouldn’t be sufficient to create an optimal LiveView application with a polished user experience (UX) and a smooth interface.

Another challenge was infrastructure. Since our service was read-heavy, we used AWS RDS reader instances to maximise performance, but this led to occasional replication delays. These delays could cause mismatches when customer service updated data and LiveView reloaded the page before the updates had replicated to the reader instances. We had to carefully consider when it was appropriate to use the reader instances and adjust our approach accordingly.
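One way to make that choice explicit in code is to route reads through separate Ecto repos, as in Ecto's replica pattern. The sketch below is an assumption about how this could look, not the client's setup: MyApp.Repo (primary) and MyApp.Repo.Replica (RDS reader) are hypothetical repo modules.

```elixir
defmodule MyApp.Users do
  alias MyApp.Repo

  # Bulk listing tolerates a little replication lag,
  # so it can safely hit the reader instance.
  def list_users do
    Repo.Replica.all(MyApp.User)
  end

  # Writes always go to the primary.
  def update_email(user, email) do
    user
    |> Ecto.Changeset.change(email: email)
    |> Repo.update()
  end

  # Read-after-write lookups (e.g., LiveView reloading right
  # after customer service edits a record) must use the primary,
  # or they may see stale data from a lagging replica.
  def get_user_fresh(id) do
    Repo.get(MyApp.User, id)
  end
end
```

Naming the two paths separately forces each call site to decide whether stale reads are acceptable, which is exactly the judgement the replication delays demanded of us.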

Team Dynamics and Collaboration

The mob programming way of working was also one of the factors behind the success of this project. Our team consisted of members with different areas of expertise. By working together, we could discuss and share our experiences while programming, instead of having to explain later, in code review or knowledge-sharing sessions, what each of us had implemented and why. For example, we guided a member who had more experience in Erlang/OTP through creating a form with LiveView, which required more experience with Ecto and Phoenix. That member could then explain and guide others through OTP-related implementation in our services.

Mob programming helped our team focus on one large task at a time. This collaborative approach ensured a consistent codebase with unified conventions, leading to efficient feature implementation.

Conclusion

What began as a simple backend project with Phoenix and Ecto evolved into a key tool for customer service, driven by the power of Phoenix LiveView. The Admin page, initially unplanned, became an integral part of the client’s workflow, proving the vast potential of LiveView and Elixir.

Though we encountered challenges, LiveView’s real-time interactivity, seamless integration, and developer control over both the backend and UI were invaluable. We believe we’ve only scratched the surface of what developers can achieve with LiveView.

Want to learn more about LiveView? Check out this article. If you’re exploring Phoenix LiveView for your project, feel free to reach out—we’d love to share our experience and help you unlock its full potential.

The post Implementing Phoenix LiveView: From Concept to Production appeared first on Erlang Solutions.

by Phuong Van at October 24, 2024 09:22

October 22, 2024

ProcessOne

ProcessOne Unveils New Website

We’re excited to announce the relaunch of our website, designed to better showcase our expertise in large-scale messaging solutions, highlighting our full spectrum of supported protocols—from XMPP to MQTT and Matrix. This reflects our core strength: delivering reliable messaging at scale.

The last major redesign was back in October 2017, so this update was long overdue. As we say farewell to the old design, here’s a screenshot of the previous version to commemorate the journey so far.

In addition to refreshing the layout and structure, we’ve made a significant change under the hood by migrating from WordPress to Ghost. After using Ghost for my personal blog and being thoroughly impressed, we knew it was the right choice for ProcessOne. The new platform offers not only long-term maintainability but also a much more streamlined, enjoyable day-to-day experience, thanks to its faster and more efficient authoring tools.

All of our previous blog content has been successfully migrated, and we’re now in a great position to deliver more frequent updates on topics such as messaging, XMPP, ejabberd, MQTT, and Matrix. Stay tuned for exciting new posts!

We’d love to hear your feedback and suggestions on what topics you’d like us to cover next. To join the conversation, simply create an account on our site and share your thoughts.

by Mickaël Rémond at October 22, 2024 14:05
