Planet Jabber

July 19, 2024

Gajim

Gajim 1.9.2

Gajim 1.9.2 brings an important OMEMO encryption fix, native notifications on Windows, usability improvements, and many bugfixes. Thank you for all your contributions!

What’s New

For some versions now, Windows offers a native notification system, including a notification center for unread notifications, notification settings, etc. If you are running Windows 10 (specifically build 10240) or later versions, Gajim will now use these native notifications.

Thanks to our contributor @nicoco, notifications for new messages from group chats now show the group chat’s avatar combined with the sender’s avatar.

Last but not least, an annoying issue with OMEMO encrypted messages has been fixed, where people would have broken sessions after being offline for a while.

This release also comes with many bugfixes. Have a look at the changelog for a complete list.

Gajim

As always, don’t hesitate to contact us at gajim@conference.gajim.org or open an issue on our Gitlab.

July 19, 2024 00:00

July 18, 2024

ProcessOne

ejabberd 24.02

🚀 Introducing ejabberd 24.02: A Huge Release!

ejabberd 24.02 has just been released, and it is a huge one: around 200 commits in the main repository, plus more in the libraries. We’ve packed this update with new features, significant improvements, and essential bug fixes, all designed to supercharge your messaging infrastructure.


🌐 Matrix Federation Unleashed: Imagine seamlessly connecting with Matrix servers – it’s now possible! ejabberd breaks new ground in cross-platform communication, fostering a more interconnected messaging universe. We still have some ground to cover, and your feedback will help us get there.
🔐 Cutting-Edge Security with TLS 1.3 & SASL2: In an era where security is paramount, ejabberd steps up its game. With support for TLS 1.3 and advanced SASL2 protocols, we increase the overall security for all platform users.
🚀 Performance Enhancements with Bind 2: Faster connection times, especially crucial for mobile network users, thanks to Bind 2 and other performance optimizations.
🔄 Users gain better control over their messages: The new support for XEP-0424: Message Retraction allows users to manage their message history and remove something they posted by mistake.
🔧 Optimized server pings by relying on an existing mechanism coming from XEP-0198
📈 Streamlined API Versioning: Our refined API versioning means smoother, more flexible integration for your applications.
🧩 Enhanced Elixir, Mix and Rebar3 Support

If you upgrade ejabberd from a previous release, please review the changes described below:

A more detailed explanation of those topics and other features:

Matrix federation

ejabberd is now able to federate with Matrix servers. Detailed instructions for setting up Matrix federation with ejabberd will follow in another post.

Here is a quick summary of the configuration steps:

First, s2s must be enabled on ejabberd. Then define a listener that uses mod_matrix_gw:

listen:
  -
    port: 8448
    module: ejabberd_http
    tls: true
    certfile: "/opt/ejabberd/conf/server.pem"
    request_handlers:
      "/_matrix": mod_matrix_gw

And add mod_matrix_gw in your modules:

modules:
  mod_matrix_gw:
    matrix_domain: "domain.com"
    key_name: "somename"
    key: "yourkeyinbase64"

Support TLS 1.3, Bind 2, SASL2

Support for XEP-0424 Message Retraction

With the new support for XEP-0424: Message Retraction, users of MAM message archiving can now manage their archive, with the ability to request deletion of their own messages.

Support for XEP-0198 pings

If stream management is enabled, mod_ping now triggers XEP-0198 <r/> requests rather than sending XEP-0199 pings. This avoids the overhead of the ping IQ stanzas, which are accompanied by XEP-0198 elements anyway when stream management is in use.
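This behavior needs no dedicated option: it applies whenever mod_ping and stream management are both active. A minimal configuration sketch (the interval value is illustrative, not a new requirement):

```yaml
modules:
  mod_stream_mgmt: {}
  mod_ping:
    send_pings: true
    ping_interval: 60
```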

Update the SQL schema

The table archive has a text column named origin_id (see commit 975681). You have two methods to update the SQL schema of your existing database:

If using MySQL or PostgreSQL, you can enable the option update_sql_schema and ejabberd will take care of updating the SQL schema when needed: add the line update_sql_schema: true to your ejabberd configuration file.

If you are using another database, or prefer to update the SQL schema manually:

  • MySQL default schema:
ALTER TABLE archive ADD COLUMN origin_id varchar(191) NOT NULL DEFAULT '';
ALTER TABLE archive ALTER COLUMN origin_id DROP DEFAULT;
CREATE INDEX i_archive_username_origin_id USING BTREE ON archive(username(191), origin_id(191));
  • MySQL new schema:
ALTER TABLE archive ADD COLUMN origin_id varchar(191) NOT NULL DEFAULT '';
ALTER TABLE archive ALTER COLUMN origin_id DROP DEFAULT;
CREATE INDEX i_archive_sh_username_origin_id USING BTREE ON archive(server_host(191), username(191), origin_id(191));
  • PostgreSQL default schema:
ALTER TABLE archive ADD COLUMN origin_id text NOT NULL DEFAULT '';
ALTER TABLE archive ALTER COLUMN origin_id DROP DEFAULT;
CREATE INDEX i_archive_username_origin_id ON archive USING btree (username, origin_id);
  • PostgreSQL new schema:
ALTER TABLE archive ADD COLUMN origin_id text NOT NULL DEFAULT '';
ALTER TABLE archive ALTER COLUMN origin_id DROP DEFAULT;
CREATE INDEX i_archive_sh_username_origin_id ON archive USING btree (server_host, username, origin_id);
  • MSSQL default schema:
ALTER TABLE [dbo].[archive] ADD [origin_id] VARCHAR (250) NOT NULL;
CREATE INDEX [archive_username_origin_id] ON [archive] (username, origin_id)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);
  • MSSQL new schema:
ALTER TABLE [dbo].[archive] ADD [origin_id] VARCHAR (250) NOT NULL;
CREATE INDEX [archive_sh_username_origin_id] ON [archive] (server_host, username, origin_id)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);
  • SQLite default schema:
ALTER TABLE archive ADD COLUMN origin_id text NOT NULL DEFAULT '';
CREATE INDEX i_archive_username_origin_id ON archive (username, origin_id);
  • SQLite new schema:
ALTER TABLE archive ADD COLUMN origin_id text NOT NULL DEFAULT '';
CREATE INDEX i_archive_sh_username_origin_id ON archive (server_host, username, origin_id);

Authentication workaround for Converse.js and Strophe.js

This ejabberd release includes support for XEP-0474: SASL SCRAM Downgrade Protection, and some clients may not support it correctly yet.

If you are using Converse.js 10.1.6 or older, Movim 0.23 Kojima or older, or any other client based on Strophe.js v1.6.2 or older, you may notice that they cannot authenticate correctly to ejabberd.

To solve that problem, either update to newer versions of those programs (if they exist), or temporarily enable the option disable_sasl_scram_downgrade_protection in the ejabberd configuration file ejabberd.yml like this:

disable_sasl_scram_downgrade_protection: true

Support for API versioning

Until now, when a new ejabberd release changed some API command (an argument renamed, a result in a different format…), you had to update your API client to the new API at the same time that you updated ejabberd.

Now ejabberd API commands can have different versions; by default the most recent one is used, and the API client can specify the API version it supports.

In fact, this feature was implemented seven years ago, included in ejabberd 16.04, documented in ejabberd Docs: API Versioning… but it was never actually used!

This ejabberd release includes many fixes to bring API versioning up to date, and several commands now start using it.

Let’s say that ejabberd 23.10 implemented API version 0, and this ejabberd 24.02 adds API version 1. You may want to update your API client to use the new API version 1… or you can continue using API version 0 and delay the API update for a few weeks or months.

To continue using API version 0:
– if using ejabberdctl, use the switch --version 0. For example: ejabberdctl --version 0 get_roster admin localhost
– if using mod_http_api, in ejabberd configuration file add v0 to the request_handlers path. For example: /api/v0: mod_http_api
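Both handlers can coexist on the same HTTP listener, so legacy clients stay pinned to API version 0 while new clients get the latest version. A configuration sketch (the port and paths are illustrative):

```yaml
listen:
  -
    port: 5280
    module: ejabberd_http
    request_handlers:
      "/api": mod_http_api
      "/api/v0": mod_http_api
```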

Check the details in ejabberd Docs: API Versioning.

ejabberd commands API version 1

When you want to update your API client to support ejabberd API version 1, those are the changes to take into account:
– Commands with list arguments
– mod_http_api does not name integer and string results
– ejabberdctl with list arguments
– ejabberdctl list results

All those changes are described in the next sections.

Commands with list arguments

Several commands now use list arguments instead of a string with separators (different commands used different separators: ; : \n ,).

The commands improved in API version 1:
add_rosteritem
oauth_issue_token
send_direct_invitation
srg_create
subscribe_room
subscribe_room_many

For example, srg_create in API version 0 took as arguments:

{"group": "group3",
 "host": "myserver.com",
 "label": "Group3",
 "description": "Third group",
 "display": "group1\\ngroup2"}

now in API version 1 the command expects as arguments:

{"group": "group3",
 "host": "myserver.com",
 "label": "Group3",
 "description": "Third group",
 "display": ["group1", "group2"]}

mod_http_api does not name integer and string results

There was an inconsistency in mod_http_api results: when the result was an integer or a string, it contained the result name, but when it was a list, tuple, rescode, etc., it did not. For example:

$ curl -k -X POST -H "Content-type: application/json" -d '{}' "http://localhost:5280/api/get_loglevel/v0"
{"levelatom":"info"}

Starting with API version 1, when the result is an integer or a string, it will not contain the result name. This is now consistent with the other result formats (list, tuple, …), which don’t contain the result name either.

Some examples with API version 0 and API version 1:

$ curl -k -X POST -H "Content-type: application/json" -d '{}' "http://localhost:5280/api/get_loglevel/v0"
{"levelatom":"info"}

$ curl -k -X POST -H "Content-type: application/json" -d '{}' "http://localhost:5280/api/get_loglevel"
"info"

$ curl -k -X POST -H "Content-type: application/json" -d '{"name": "registeredusers"}' "http://localhost:5280/api/stats/v0"
{"stat":2}

$ curl -k -X POST -H "Content-type: application/json" -d '{"name": "registeredusers"}' "http://localhost:5280/api/stats"
2

$ curl -k -X POST -H "Content-type: application/json" -d '{"host": "localhost"}' "http://localhost:5280/api/registered_users/v0"
["admin","user1"]

$ curl -k -X POST -H "Content-type: application/json" -d '{"host": "localhost"}' "http://localhost:5280/api/registered_users"
["admin","user1"]

ejabberdctl with list arguments

ejabberdctl now supports list and tuple arguments, like mod_http_api and ejabberd_xmlrpc. This allows ejabberdctl to execute all the existing commands, even some that were impossible until now like create_room_with_opts and set_vcard2_multi.

List elements are separated with , and tuple elements are separated with :.

Relevant commands:
add_rosteritem
create_room_with_opts
oauth_issue_token
send_direct_invitation
set_vcard2_multi
srg_create
subscribe_room
subscribe_room_many

Some example uses:

ejabberdctl add_rosteritem user1 localhost testuser7 localhost NickUser77l gr1,gr2,gr3 both
ejabberdctl create_room_with_opts room1 conference.localhost localhost public:false,persistent:true
ejabberdctl subscribe_room_many user1@localhost:User1,admin@localhost:Admin room1@conference.localhost urn:xmpp:mucsub:nodes:messages,u
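Those argument strings can also be built programmatically. A small hypothetical helper (not part of ejabberd) that applies the separator rules above, , for list elements and : for tuple elements:

```python
def ctl_arg(value):
    """Serialize a value into ejabberdctl argument syntax:
    tuple elements joined with ':', list elements joined with ','."""
    if isinstance(value, tuple):
        return ":".join(str(v) for v in value)
    if isinstance(value, list):
        return ",".join(ctl_arg(v) for v in value)
    return str(value)

# The subscribe_room_many subscriber list from the example above:
print(ctl_arg([("user1@localhost", "User1"), ("admin@localhost", "Admin")]))
# user1@localhost:User1,admin@localhost:Admin

# The create_room_with_opts room options from the example above:
print(ctl_arg([("public", "false"), ("persistent", "true")]))
# public:false,persistent:true
```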

ejabberdctl list results

Until now, ejabberdctl returned list elements separated with ;. Now in API version 1 list elements are separated with ,.

For example, in ejabberd 23.10:

$ ejabberdctl get_roster admin localhost
jan@localhost jan   none    subscribe       group1;group2
tom@localhost tom   none    subscribe       group3

Since this ejabberd release, using API version 1:

$ ejabberdctl get_roster admin localhost
jan@localhost jan   none    subscribe       group1,group2
tom@localhost tom   none    subscribe       group3

It is still possible to get the results in the old syntax, using API version 0:

$ ejabberdctl --version 0 get_roster admin localhost
jan@localhost jan   none    subscribe       group1;group2
tom@localhost tom   none    subscribe       group3
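Scripts that parse get_roster output must know which separator to expect in the groups column. A hedged parsing sketch (the field layout is taken from the examples above):

```python
def parse_roster_line(line, api_version=1):
    """Split one get_roster output line; the last column holds the
    group names: ';'-separated in API v0, ','-separated in API v1."""
    jid, nick, subscription, ask, groups = line.split()
    separator = ";" if api_version == 0 else ","
    return {"jid": jid, "nick": nick, "subscription": subscription,
            "ask": ask, "groups": groups.split(separator)}

print(parse_roster_line("jan@localhost jan none subscribe group1,group2"))
print(parse_roster_line("jan@localhost jan none subscribe group1;group2",
                        api_version=0))
```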

ejabberdctl help improved

ejabberd supports around 200 administrative commands, and you probably consult them on the ejabberd Docs -> API Reference page, where all the command documentation is nicely displayed…

The ejabberdctl command-line script could already display command documentation, querying your ejabberd server in real time to show exactly the commands that are available, but it lacked some details. That has been improved: ejabberdctl now shows all the information, including argument descriptions, examples and version notes.

For example, the connected_users_vhost command documentation, as seen on the ejabberd Docs site, is now equally visible using ejabberdctl:

$ ejabberdctl help connected_users_vhost
  Command Name: connected_users_vhost

  Arguments: host::binary : Server name

  Result: connected_users_vhost::[ sessions::string ]

  Example: ejabberdctl connected_users_vhost "myexample.com"
           user1@myserver.com/tka
           user2@localhost/tka

  Tags: session

  Module: mod_admin_extra

  Description: Get the list of established sessions in a vhost

Experimental support for Erlang/OTP 27

Erlang/OTP 27.0-rc1 was recently released, and ejabberd can be compiled with it. If you are developing or experimenting with ejabberd, it would be great if you can use Erlang/OTP 27 and report any problems you find. For production servers, it’s recommended to stick with Erlang/OTP 26.2 or any previous version.

Along the same lines, the rebar and rebar3 binaries included with ejabberd have been updated: they now support Erlang 24 to Erlang 27. If you want to use older Erlang versions, from 20 to 23, compatible binaries are available in git: rebar from ejabberd 21.12 and rebar3 from ejabberd 21.12.

Of course, if you already have rebar or rebar3 installed in your system, it is preferable to use those, as they will most likely be compatible with whatever Erlang version you have installed.

Installers and ejabberd container image

The binary installers now include the recent and stable Erlang/OTP 26.2.2 and Elixir 1.16.1. Many other dependencies were updated in the installers; most notably, OpenSSL has jumped to version 3.2.1.

The ejabberd container image and the ecs container image have received all those version updates, and Alpine is updated to 3.19.

By the way, this container image already had support for running commands when the container starts. Now you can allow those commands to fail by prepending the character !.
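For example, in a docker-compose file (a hypothetical sketch: the account and password are illustrative), a command prefixed with ! is allowed to fail without aborting startup:

```yaml
services:
  ejabberd:
    image: ghcr.io/processone/ejabberd
    environment:
      # '!' means: don't abort startup if this command fails,
      # e.g. because the account already exists on a restart.
      CTL_ON_CREATE: "!register admin localhost somepassword"
```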

Summary of compilation methods

When compiling ejabberd from source code, you may have noticed there are many possibilities. Let’s take an overview before digging into the new improvements:

  • Tools to manage the dependencies and compilation:
    • Rebar: it is nowadays very obsolete, but still does the job of compiling ejabberd
    • Rebar3: the successor of Rebar, with many improvements and plugins, supports hex.pm and Elixir compilation
    • Mix: included with the Elixir programming language, supports hex.pm, and erlang compilation
  • Installation methods:
    • make install: copies the files to the system
    • make prod: prepares a self-contained OTP production release in _build/prod/, and generates a tar.gz file. This was previously named make rel
    • make dev: quickly prepares an OTP development release in _build/dev/
    • make relive: prepares the bare minimum in _build/relive/ to run ejabberd and starts it
  • Start scripts and alternatives:
    • ejabberdctl with erlang shell: start/foreground/live
    • ejabberdctl with elixir shell: iexlive
    • ejabberd console/start (this script is generated by rebar3 or mix, and does not support ejabberdctl configurable options)

For example:
– the CI dynamic tests use rebar3, and Runtime tries to test all the possible combinations
– ejabberd binary installers are built using: mix + make prod
– container images are built using mix + make prod too, and started with ejabberdctl foreground

Several combinations didn’t work correctly until now and have been fixed, for example:
– mix + make relive
– mix + make prod/dev + ejabberdctl iexlive
– mix + make install + ejabberdctl start/foreground/live
– make uninstall (which is buggy) has an experimental alternative: make uninstall-rel
– rebar + make prod with Erlang 26

Use Mix or Rebar3 by default instead of Rebar to compile ejabberd

ejabberd has used Rebar to manage dependencies and compilation since ejabberd 13.10 (4d8f770). However, that tool has been obsolete and unmaintained for years, because there is a complete replacement:

Rebar3 has been supported by ejabberd since 20.12 (0fc1aea). Among other benefits, it can download dependencies from hex.pm and cache them on your system instead of downloading them from git every time, and it can compile Elixir files and Elixir dependencies.

In fact, ejabberd can be compiled using mix (a tool included with the Elixir programming language) since ejabberd 15.04 (ea8db99), with improvements in ejabberd 21.07 (4c5641a).

For those reasons, the tool selection performed by ./configure is now:
– Use the program specified in the option --with-rebar=/path/to/bin
– If none is specified, use the system mix
– If Elixir is not found, use the system rebar3
– If Rebar3 is not found, use the rebar3 binary included with ejabberd
– Similarly, if --with-rebar=rebar3 is given but Rebar3 is not installed in the system, the rebar3 binary included with ejabberd is used

Removed Elixir support in Rebar

Support for Elixir 1.1 was added as a dependency in commit 01e1f67 for ejabberd 15.02, which allowed Elixir files to be compiled. But since Elixir 1.4.5 (released June 22, 2017) it is no longer possible to get Elixir as a dependency: it is nowadays a standalone program. For that reason, support for downloading the old Elixir 1.4.4 as a dependency has been removed.

When Elixir support is required, it is better to simply install Elixir and use mix as the build tool:

./configure --with-rebar=mix

Or install Elixir and use the experimental Rebar3 support to compile Elixir files and dependencies:

./configure --with-rebar=rebar3 --enable-elixir

Added Elixir support in Rebar3

It is now possible to compile ejabberd using Rebar3 with Elixir compilation support. This compiles the Elixir files included in ejabberd’s lib/ path. There is also support for fetching dependencies written in Elixir, and it’s possible to build OTP releases including Elixir support.

It is necessary to have Elixir installed on the system, and to configure the compilation using --enable-elixir. For example:

apt-get install erlang erlang-dev elixir
git clone https://github.com/processone/ejabberd.git ejabberd
cd ejabberd
./autogen.sh
./configure --with-rebar=rebar3 --enable-elixir
make
make dev
_build/dev/rel/ejabberd/bin/ejabberdctl iexlive

Elixir versions supported

Elixir 1.10.3 is the minimum supported, but:
– Elixir 1.10.3 or higher is required to build an OTP release with make prod or make dev
– Elixir 1.11.4 or higher is required to build an OTP release if using Erlang/OTP 24 or higher
– Elixir 1.11.0 or higher is required to use make relive
– Elixir 1.13.4 with Erlang/OTP 23.0 are the lowest versions tested by Runtime

For all those reasons, if you want to use Elixir, it is highly recommended to use Elixir 1.13.4 or higher with Erlang/OTP 23.0 or higher.

make rel is renamed to make prod

When ejabberd started to use Rebar2 build tool, that tool could create an OTP release, and the target in Makefile.in was conveniently named make rel.

However, newer tools like Rebar3 and Elixir’s Mix support creating different types of releases: production, development, … In this sense, our make rel target is nowadays more properly named make prod.

For backwards compatibility, make rel redirects to make prod.

New make install-rel and make uninstall-rel

This is an alternative method to install ejabberd in the system, based on the OTP release process. It should produce exactly the same results as the existing make install.

The benefits of make install-rel over the existing method:
– it uses the OTP release code from rebar/rebar3/mix, and consequently requires less code in our Makefile.in
– make uninstall-rel correctly deletes all the library files

This is still experimental, and it would be great if you could test it and report any problems; eventually this method could replace the existing one.

Just for curiosity:
– ejabberd 13.03-beta1 added support for make uninstall
– ejabberd 13.10 introduced the Rebar build tool and the code became more modular
– ejabberd 15.10 started to use the OTP directory structure for make install, and this broke make uninstall

Acknowledgments

We would like to thank the contributions to the source code, documentation, and translation provided for this release by:

And also to all the people contributing in the ejabberd chatroom, issue tracker…

Improvements in ejabberd Business Edition

Customers of the ejabberd Business Edition, in addition to all those improvements and bugfixes, also get:

Push

  • Fix clock issue when signing Apple push JWT tokens
  • Share Apple push JWT tokens between nodes in cluster
  • Increase allowed certificates chain depth in GCM requests
  • Use x:oob data as source for image delivered in pushes
  • Process only https urls in oob as images in pushes
  • Fix jid in disable push iq generated by GCM and Webhook service
  • Add better logging for TooManyProviderTokenUpdated error
  • Make get_push_logs command generate better error if mod_push_logger not available
  • Add command get_push_logs that can be used to retrieve info about recent pushes and errors reported by push services
  • Add support for webpush protocol for sending pushes to safari/chrome/firefox browsers

MAM

  • Expand mod_mam_http_access API to also accept range of messages

MUC

  • Update mod_muc_state_query to fix subject_author room state field
  • Fix encoding of config xdata in mod_muc_state_query

PubSub

  • Allow pubsub node owner to overwrite items published by other persons (p1db)

ChangeLog

This is a more detailed list of changes in this ejabberd release:

Core

  • Added Matrix gateway in mod_matrix_gw
  • Support SASL2 and Bind2
  • Support tls-server-end-point channel binding and sasl2 codec
  • Support tls-exporter channel binding
  • Support XEP-0474: SASL SCRAM Downgrade Protection
  • Fix presenting features and returning results of inline bind2 elements
  • disable_sasl_scram_downgrade_protection: New option to disable XEP-0474
  • negotiation_timeout: Increase default value from 30s to 2m
  • mod_carboncopy: Teach how to interact with bind2 inline requests

Other

  • ejabberdctl: Fix startup problem when having set EJABBERD_OPTS and logger options
  • ejabberdctl: Set EJABBERD_OPTS back to "", and use previous flags as example
  • eldap: Change logic for eldap tls_verify=soft and false
  • eldap: Don’t set fail_if_no_peer_cert for eldap ssl client connections
  • Ignore hints when checking for chat states
  • mod_mam: Support XEP-0424 Message Retraction
  • mod_mam: Fix XEP-0425: Message Moderation with SQL storage
  • mod_ping: Support XEP-0198 pings when stream management is enabled
  • mod_pubsub: Normalize pubsub max_items node options on read
  • mod_pubsub: PEP nodetree: Fix reversed logic in node fixup function
  • mod_pubsub: Only care about PEP bookmarks options when creating node from scratch

SQL

  • MySQL: Support sha256_password auth plugin
  • ejabberd_sql_schema: Use the first unique index as a primary key
  • Update SQL schema files for MAM’s XEP-0424
  • New option sql_flags: right now only useful to enable mysql_alternative_upsert

Installers and Container

  • Container: Add ability to ignore failures in execution of CTL_ON_* commands
  • Container: Update to Erlang/OTP 26.2, Elixir 1.16.1 and Alpine 3.19
  • Container: Update this custom ejabberdctl to match the main one
  • make-binaries: Bump OpenSSL 3.2.1, Erlang/OTP 26.2.2, Elixir 1.16.1
  • make-binaries: Bump many dependency versions

Commands API

  • print_sql_schema: New command available in ejabberdctl command-line script
  • ejabberdctl: Rework temporary node name generation
  • ejabberdctl: Print argument description, examples and note in help
  • ejabberdctl: Document exclusive ejabberdctl commands like all the others
  • Commands: Add a new muc_sub tag to all the relevant commands
  • Commands: Improve syntax of many commands documentation
  • Commands: Use list arguments in many commands that used separators
  • Commands: set_presence: switch priority argument from string to integer
  • ejabberd_commands: Add the command API version as a tag vX
  • ejabberd_ctl: Add support for list and tuple arguments
  • ejabberd_xmlrpc: Fix support for restuple error response
  • mod_http_api: When no specific API version is requested, use the latest

Compilation with Rebar3/Elixir/Mix

  • Fix compilation with Erlang/OTP 27: don’t use the reserved word ‘maybe’
  • configure: Fix explanation of --enable-group option (#4135)
  • Add observer and runtime_tools in releases when --enable-tools
  • Update “make translations” to reduce build requirements
  • Use Luerl 1.0 for Erlang 20, 1.1.1 for 21-26, and temporary fork for 27
  • Makefile: Add install-rel and uninstall-rel
  • Makefile: Rename make rel to make prod
  • Makefile: Update make edoc to use ExDoc, requires mix
  • Makefile: No need to use escript to run rebar|rebar3|mix
  • configure: If --with-rebar=rebar3 but rebar3 not system-installed, use local one
  • configure: Use Mix or Rebar3 by default instead of Rebar2 to compile ejabberd
  • ejabberdctl: Detect problem running iex or etop and show explanation
  • Rebar3: Include Elixir files when making a release
  • Rebar3: Workaround to fix protocol consolidation
  • Rebar3: Add support to compile Elixir dependencies
  • Rebar3: Compile explicitly our Elixir files when --enable-elixir
  • Rebar3: Provide proper path to iex
  • Rebar/Rebar3: Update binaries to work with Erlang/OTP 24-27
  • Rebar/Rebar3: Remove Elixir as a rebar dependency
  • Rebar3/Mix: If dev profile/environment, enable tools automatically
  • Elixir: Fix compiling ejabberd as a dependency (#4128)
  • Elixir: Fix ejabberdctl start/live when installed
  • Elixir: Fix: FORMATTER ERROR: bad return value (#4087)
  • Elixir: Fix: Couldn’t find file Elixir Hex API
  • Mix: Enable stun by default when vars.config not found
  • Mix: New option vars_config_path to set path to vars.config (#4128)
  • Mix: Fix ejabberdctl iexlive problem locating iex in an OTP release

Full Changelog

https://github.com/processone/ejabberd/compare/23.10...24.02

ejabberd 24.02 download & feedback

As usual, the release is tagged in the Git source code repository on GitHub.

The source package and installers are available in ejabberd Downloads page. To check the *.asc signature files, see How to verify ProcessOne downloads integrity.

For convenience, there are alternative download locations like the ejabberd DEB/RPM Packages Repository and the GitHub Release / Tags.

The ecs container image is available in docker.io/ejabberd/ecs and ghcr.io/processone/ecs. The alternative ejabberd container image is available in ghcr.io/processone/ejabberd.

If you think you have found a bug, please search for it or file a bug report on GitHub Issues.

The post ejabberd 24.02 first appeared on ProcessOne.

by Jérôme Sautret at July 18, 2024 15:55

ejabberd 24.07

🚀 Introducing ejabberd 24.07: Bugfix Release

This ejabberd 24.07 is mostly a bugfix release for the recent 24.06, and also includes a few improvements.

ejabberd 24.07

If you upgrade ejabberd from a previous release, please check the WebAdmin Config Changes.

A more detailed explanation of those topics and other features:

WebAdmin API permissions configuration

The ejabberd 24.06 release notes announced the Improved WebAdmin with commands usage, and mentioned some api_permissions configuration details, but it was not explicit enough about this fact: with the default ejabberd configuration, an admin was allowed to log in to WebAdmin from any machine, but could only run commands from the loopback IP address. The WebAdmin showed the page sections, but they were all empty. In addition, a bug showed similar symptoms when entering the WebAdmin on one host and then logging in as an account on another host. Both problems and their solutions are described in #4249.

Please update your configuration accordingly, granting accounts logged in with admin privileges permission to execute all commands from the WebAdmin:

api_permissions:
  "webadmin commands":
    from: ejabberd_web_admin
    who: admin
    what: "*"

Of course you can customize that access as much as you want: only from specific IP addresses, only to certain accounts, only for specific commands…
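For instance, a hypothetical stricter variant that only allows accounts matching an admin ACL and excludes the stop command (the values are illustrative, not a recommendation):

```yaml
api_permissions:
  "webadmin commands":
    from: ejabberd_web_admin
    who:
      access:
        allow:
          - acl: admin
    what:
      - "*"
      - "!stop"
```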

New option update_sql_schema_timeout

The new option update_sql_schema_timeout allows the schema update process to use longer timeouts. The default value is set to 5 minutes.

This also makes batches of schema updates to a single table run inside a transaction, which should help avoid leaving a table in an inconsistent state if some update steps fail (unless you use MySQL, where changes to table schemas cannot be rolled back).
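In ejabberd.yml this could look like the following sketch (the timeout value shown is an illustrative number of seconds, not a recommendation):

```yaml
update_sql_schema: true
update_sql_schema_timeout: 600
```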

Acknowledgments

We would like to thank the contributions to the source code, documentation, and translation provided for this release by:

And also to all the people contributing in the ejabberd chatroom, issue tracker…

ChangeLog

This is a more detailed list of changes in this ejabberd release:

Core

  • ejabberd_options: Add trailing @ to @VERSION@ parsing
  • mod_http_api: Fix problem parsing tuples when using OTP 27 json library (#4242)
  • mod_http_api: Restore args conversion of {"k":"v"} to tuple lists
  • mod_matrix_gw: Add misc:json_encode_with_kv_lists and use it in matrix sign function
  • mod_muc: Output muc#roominfo_avatarhash in room disco info as per updated XEP-0486 (#4234)
  • mod_muc: Improve cross version handling of muc retractions
  • node_pep: Add missing feature item-ids to node_pep
  • mod_register: Send welcome message as chat too (#4246)
  • ejabberd_hooks: Support for ejabberd hook subscribers, useful for mod_prometheus
  • ejabberd.app: Don’t add iex to included_applications
  • make-installers: Fix path in scripts in regular user install (#4258)
  • Test: New tests for API commands

Documentation

  • mod_matrix_gw: Fix matrix_id_as_jid option documentation
  • mod_register: Add example configuration of welcome_message option
  • mix.exs: Add ejabberd example config files to the hex package
  • Update CODE_OF_CONDUCT.md

ext_mod

  • Fetch dependencies from hex.pm when mix is available
  • files_to_path is deprecated, use compile_to_path
  • Compile all Elixir files in a library with one function call
  • Improve error result when problem compiling elixir file
  • Handle case when contrib module has no *.ex and no *.erl
  • mix.exs: Include Elixir’s Logger in the OTP release, useful for mod_libcluster

Logs

  • Print message when starting ejabberd application fails
  • Use error_logger when printing startup failure message
  • Use proper format depending on the formatter (#4256)

SQL

  • Add option update_sql_schema_timeout to allow schema update use longer timeouts
  • Add ability to specify custom timeout for sql operations
  • Allow to configure number of restart in sql_transaction()
  • Make sql query in testsuite compatible with pg9.1
  • In mysql.sql, fix update instructions for the archive table, origin_id column (#4259)

WebAdmin

  • ejabberd.yml.example: Add api_permissions group for webadmin (#4249)
  • Don’t use host from url in webadmin, prefer host used for authentication
  • Fix number of accounts shown in the online-users page
  • Fix crash when viewing old shared roster groups (#4245)
  • Support groupid with spaces when making shared roster result (#4245)

Full Changelog

https://github.com/processone/ejabberd/compare/24.06...24.07

ejabberd 24.07 download & feedback

As usual, the release is tagged in the Git source code repository on GitHub.

The source package and installers are available in ejabberd Downloads page. To check the *.asc signature files, see How to verify ProcessOne downloads integrity.

For convenience, there are alternative download locations like the ejabberd DEB/RPM Packages Repository and the GitHub Release / Tags.

The ecs container image is available in docker.io/ejabberd/ecs and ghcr.io/processone/ecs. The alternative ejabberd container image is available in ghcr.io/processone/ejabberd.

If you consider that you’ve found a bug, please search or fill a bug report on GitHub Issues.

The post ejabberd 24.07 first appeared on ProcessOne.

by Jérôme Sautret at July 18, 2024 15:50

Erlang Solutions

Meet the team: Nico Gerpe

Welcome to our first-ever “Meet the Team” series!  In this first edition, we’ll be shining the spotlight on Nico Gerpe, the Business Unit Lead for the Americas team at Erlang Solutions. 

Nico discusses his role at Erlang Solutions, his latest explorations in the IoT and machine learning space and most importantly- fun Argentinian summer traditions!

Meet the Team Nico Gerpe

About Nico

What is your role at Erlang Solutions and where are you based?
My role is Business Unit Lead for the Americas team and I am based out of Buenos Aires, Argentina.

What are the current priorities for the Americas team? Are there any specific markets you are focusing on?
Our priorities are growing our practices related to IoT, ML and DevOps on top of our main consultancy area of business. Our services and products benefit customers the most when they have already begun their digital journey and face challenges with their current tech stack. We help them overcome these obstacles and support their growth.

Which specific areas of IoT and machine learning are you planning to explore?
With regards to IoT, we are quite flexible and knowledgeable in managing multiple distributed devices, securing connections and handling vast areas of intercommunication. When it comes to machine learning, we are focusing on data processing/learning and computer vision.

Have you noticed any recent changes in the IoT and machine learning markets that have caught your attention?
Machine learning is evolving beyond training models. It is a way to consolidate and share data from multiple sources, creating significant value. What captured my attention was that it naturally gave me the solution as to how our IoT and ML practices were connected, to provide a comprehensive solution to our customers.

Do you have any fun plans for summer?
Going to the beach for at least a week.

Are there any fun Argentinian traditions during summertime?
A commonly accepted plan for any given Sunday is to gather at a house and light up a barbecue while enjoying the swimming pool. Eat late lunch, around 2 pm ish, for about 2 hours starting with picada, which is a mix of sausages, fries, pickles and white bread. Then a proper lunch consisting of barbecued cow, pork and/or chicken meat with salad options. After eating for almost two hours, we have a round of coffee and go back to the swimming pool. To sit back again and eat cake, candies, cookies, etc for another hour or so.

Final thoughts

That’s a wrap for our chat with Nico! 

The Americas team at Erlang Solutions is diving into some exciting areas of IoT and machine learning, helping businesses tackle their tech challenges and grow. 

From managing distributed devices to advancing in computer vision, there’s a lot to look forward to. Stay tuned for more fun and insightful conversations with our amazing team in the next editions of our “Meet the Team” series.

If you’d like to chat about anything IoT, drop the team a line.

The post Meet the team: Nico Gerpe appeared first on Erlang Solutions.

by Erlang Solutions Team at July 18, 2024 11:06

July 16, 2024

ProcessOne

ejabberd 24.06

🚀 Introducing ejabberd 24.06: Deep Work Release!

This new ejabberd 24.06 includes four months of work, close to 200 commits, including several minor improvements in the core ejabberd, and a lot of improvements in the administrative parts of ejabberd, like the WebAdmin and new API commands.

Brief summary

  • Webadmin rework
  • Improved documentation
  • Architecture and API improvements

If you upgrade ejabberd from a previous release, please review those changes:

A more detailed explanation of those topics and other features:

Support for Erlang/OTP 27 and Elixir 1.17

ejabberd support for Erlang/OTP 27.0 has been improved. In this sense, when using Erlang/OTP 27, the jiffy dependency is not needed, as an equivalent feature is already included in OTP.

The lowest supported Erlang/OTP version remains 20.0, and the recommended version is 26.2, which is the one included in the binary installers and container images.

Regarding Elixir, the new 1.17 works correctly. The lowest supported Elixir version is 1.10.3, but in order to benefit from all the ejabberd features, it is highly recommended to use Elixir 1.13.4 or higher with Erlang/OTP 23.0 or higher.

SQL schema changes

There are no changes in the SQL schemas in this release.

Notice that ejabberd can take care of updating your MySQL, PostgreSQL or SQLite database schema if you enable the update_sql_schema top-level option.

That feature was introduced for beta-testing in ejabberd 23.10 and announced in the blog post Automatic schema update in ejabberd.

Starting in this ejabberd 24.06, the update_sql_schema feature is considered stable and the option is enabled by default!

UNIX Socket Domain

The sql_server top-level option now accepts the path to a unix socket domain, expressed as "unix:/path/to/socket", as long as you are using mysql or pgsql in the option sql_type.
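For example, a minimal configuration sketch for MySQL over a local Unix domain socket (the socket path shown is the common Debian/Ubuntu default and is an assumption; adjust it for your system):

```yaml
## Hypothetical sketch: connect to MySQL through its local
## Unix domain socket instead of TCP.
sql_type: mysql
sql_server: "unix:/var/run/mysqld/mysqld.sock"
```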

Commands changed in API v2

This ejabberd 24.06 release introduces ejabberd Commands API v2. You can continue using API v1; or, if you want to update your API client to use API v2, these are the commands that changed and may need updating in your client:

Support for banning an account has been improved in API v2:

  • ban_account stores the ban information in the account’s XML private storage, so this command requires mod_private to be enabled
  • get_ban_details shows information about the account ban, if any
  • unban_account performs the reverse operation, restoring the account to its previous status
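As a sketch of how a client could invoke these commands through mod_http_api: the `/api/<command>` endpoint path and port 5443 are typical defaults, and the argument names used here are assumptions based on the descriptions above, not a confirmed API reference.

```python
import json
import urllib.request

# Hypothetical sketch: invoking the API v2 ban commands via mod_http_api.
# Base URL, port, and argument names are assumptions.
API_BASE = "http://localhost:5443/api"

def api_call(command: str, **args) -> dict:
    """POST a command body as JSON to mod_http_api and decode the JSON reply."""
    req = urllib.request.Request(
        f"{API_BASE}/{command}",
        data=json.dumps(args).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Against a running server, one could then do, e.g.:
# api_call("ban_account", user="user1", host="example.org", reason="spam")
# api_call("get_ban_details", user="user1", host="example.org")
# api_call("unban_account", user="user1", host="example.org")
```

Remember that, as noted above, ban_account requires mod_private to be enabled, and the executing account needs api_permissions access to these commands.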

The result value of these two commands was modified to allow their usage in WebAdmin:

  • kick_user: instead of returning an integer, it now returns a restuple
  • rooms_empty_destroy: instead of returning the list of rooms that were destroyed, it now returns a restuple

As a side note, this command has been improved, though the change doesn’t affect the API:
join_cluster now works not only with the ejabberdctl command line script, but also with any other command frontend (mod_http_api, ejabberd_xmlrpc, ejabberd_web_admin, …).

New commands

Several new commands have been added, specially useful to generate WebAdmin pages:

Improved WebAdmin with commands usage

WebAdmin screenshot

ejabberd already has around 200 commands to perform many administrative tasks, both to get information about the server and its status, and to perform operations with side effects. Those commands have their input and output parameters clearly described and documented.

This release includes a set of functions (make_command/2 and /4, make_command_raw_value/3, make_table/2 and /4) that use those commands to generate HTML content in the ejabberd WebAdmin: instead of writing Erlang code again to perform those operations, and then more code to format and display the results as HTML, these frontend functions call the command and generate the HTML content. With this new feature, writing content for WebAdmin is much easier when a command for the task already exists.

In this sense, most of the ejabberd WebAdmin pages have been rewritten to use the new make_command feature, and many new pages have been added using existing commands. A few commands and pages have also been added to manage Shared Roster Groups.

WebAdmin screenshot

WebAdmin commands permissions configuration

Most WebAdmin pages use commands to generate the content, and access to those commands can be restricted using the api_permissions toplevel option.

The default ejabberd.yml configuration file already defines "admin access", which allows access from the loopback IP address and accounts in the admin ACL to execute all commands except stop and start. So, no changes are required in the default configuration file to use the upgraded WebAdmin pages.

Now ejabberd_web_admin is another valid command frontend that can be specified in the from section. You can define fine-grained restrictions for accounts in WebAdmin, for example:

api_permissions:
  "webadmin commands":
    from:
      - ejabberd_web_admin
    who: admin
    what:
      - "*"
      - "![tag:oauth]"

WebAdmin hook changes

There are several changes in WebAdmin hooks that now provide the whole HTTP request instead of only some of its elements.

You can update your code easily, see:

  • webadmin_page_node: instead of Path, Query and Lang, gets Request
-webadmin_page_node(Acc, Node, Path, Query, Lang) ->
+webadmin_page_node(Acc, Node, #request{path = Path, q = Query, lang = Lang}) ->
  • webadmin_page_hostnode: instead of Path, Query and Lang gets Request
-webadmin_page_hostnode(Acc, Host, Node, Path, Query, Lang) ->
+webadmin_page_hostnode(Acc, Host, Node, #request{path = Path, q = Query, lang = Lang}) ->
  • webadmin_user: instead of just the Lang, gets the whole Request
-webadmin_user(Acc, User, Server, Lang) ->
+webadmin_user(Acc, User, Server, #request{lang = Lang}) ->
  • webadmin_menu_hostuser: new hook added:
+webadmin_menu_hostuser(Acc, Host, Username, Lang) ->
  • webadmin_page_hostuser: new hook added:
+webadmin_page_hostuser(Acc, Host, Username, Request) ->

internal command tag and any argument/result

During the development of the WebAdmin commands feature, we noticed the need to define some commands that are used by WebAdmin (or maybe also by other ejabberd code), but should NOT be accessible from command frontends (like ejabberdctl, mod_http_api, ejabberd_xmlrpc).

Such commands are identified because they have the internal tag.

Those commands can use any arbitrarily-formatted arguments/results, defined as any in the command.

Experimental make format and indent

If you use Emacs with erlang-mode, Vim with an Erlang indenter, VSCode, … they all indent Erlang code more or less similarly, but sometimes with minor differences.

The new make format uses rebar3_format to format and indent files, with these restrictions:

  • It only formats a file that contains a line with the string @format-begin, and formatting starts at that line

  • Formatting can be disabled later in the file by adding another line that contains @format-end

  • It is also possible to enable formatting again later in the same file, in case another piece of the file should be formatted automatically
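By analogy with the @indent markers shown for make indent below, the format markers would be placed in the source file like this (the exact comment form is an assumption):

```erlang
%% @format-begin
...
%% @format-end
```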

Alternatively, the new make indent indents files using Emacs; it also replaces tabs with spaces and removes trailing spaces. It can only indent one piece of code per file: the lines it finds between:

%% @indent-begin
...
%% @indent-end

New MUC room logging hooks

mod_muc_room now uses hooks instead of function calls to mod_muc_log, see #4191.

The new hooks available, in case you want to write an ejabberd module that logs MUC room messages:

  • muc_log_check_access_log(Acc, Host, From)
  • muc_log_get_url(Acc, StateData)
  • muc_log_add(Host, Type, Data, RoomJid, Opts)

Support for code automatic update

When running ejabberd in an interactive development shell started using relive, it automatically compiles and reloads the source code when you modify a source code file.

How to use this:

  • Compile ejabberd with Rebar3 (or Mix)
  • Start ejabberd with make relive
  • Edit some ejabberd source code file and save it
  • Sync (or ExSync) will compile and reload it automatically

Rebar3 notes:

  • To ensure Sync doesn’t act on dependencies (which would produce many garbage log lines), the src_dirs option is used. However, this only works if the parent directory is named “ejabberd”

  • Sync requires at least Erlang/OTP 21, which introduced the new try-catch syntax to retrieve the stacktrace

Mix note:

ejabberd Docs now using MkDocs

Several changes in ejabberd source code were done to produce markdown suitable for the new ejabberd Docs site, as announced two months ago: ejabberd Docs now using MkDocs

Acknowledgments

We would like to thank the contributions to the source code, documentation, and translation provided for this release by:

And also to all the people contributing in the ejabberd chatroom, issue tracker…

Improvements in ejabberd Business Edition

Customers of the ejabberd Business Edition, in addition to all those improvements and bugfixes, also get:

Pushy.me

  • Add support for the Pushy.me notification service for Mobile App

Android Push

  • Add support for the new FCMv1 API for Android Push
  • Improve errors reporting for wrong options in mod_gcm

Apple Push

  • Update support for Apple Push API
  • Add support for p12 certificate in mod_applepush
  • Add tls_verify option to mod_applepush
  • Improve errors reporting for wrong options in mod_applepush

Webpush

  • Properly initialize subject in Webpush

Push

  • Add new API commands setup_push, get_push_setup and delete_push_setup for managing push setup, with support for Apple Push, Android Push, Pushy.me and Webpush/Webhook

ChangeLog

This is a more detailed list of changes in this ejabberd release:

Core

  • econf: Add ability to use additional custom errors when parsing options
  • ejabberd_logger: Reloading configuration will update logger settings
  • gen_mod: Add support to specify a hook global, not vhost-specific
  • mod_configure: Retract Get User Password command to update XEP-0133 1.3.0
  • mod_conversejs: Simplify support for @HOST@ in default_domain option (#4167)
  • mod_mam: Document that XEP-0441 is implemented as well
  • mod_mam: Update support for XEP-0425 version 0.3.0, keep supporting 0.2.1 (#4193)
  • mod_matrix_gw: Fix support for @HOST@ in matrix_domain option (#4167)
  • mod_muc_log: Hide join/leave lines, add method to show them
  • mod_muc_log: Support allowpm introduced in 2bd61ab
  • mod_muc_room: Use ejabberd hooks instead of function calls to mod_muc_log (#4191)
  • mod_private: Cope with bookmark decoding errors
  • mod_vcard_xupdate: Send hash after avatar get set for first time
  • prosody2ejabberd: Handle the approved attribute. As feature isn’t implemented, discard it (#4188)

SQL

  • update_sql_schema: Enable this option by default
  • CI: Don’t load database schema files for mysql and pgsql
  • Support Unix Domain Socket with updated p1_pgsql and p1_mysql (#3716)
  • Fix handling of mqtt_pub table definition from mysql.sql and fix should_update_schema/1 in ejabberd_sql_schema.erl
  • Don’t start sql connection pools for unknown hosts
  • Add update_primary_key command to sql schema updater
  • Fix crash running export2sql when MAM enabled but MUC disabled
  • Improve detection of types in odbc

Commands API

  • New ban commands use private storage to keep ban information (#4201)
  • join_cluster_here: New command to join a remote node into our local cluster
  • Don’t name integer and string results in API examples (#4198)
  • get_user_subscriptions: Fix validation of user field in that command
  • mod_admin_extra: Handle case when mod_private is not enabled (#4201)
  • mod_muc_admin: Improve validation of arguments in several commands

Compile

  • ejabberdctl: Comment ERTS_VSN variable when not used (#4194)
  • ejabberdctl: Fix iexlive after make prod when using Elixir
  • ejabberdctl: If INET_DIST_INTERFACE is IPv6, set required option (#4189)
  • ejabberdctl: Make native dynamic node names work when using fully qualified domain names
  • rebar.config.script: Support relaxed dependency version (#4192)
  • rebar.config: Update deps version to rebar3’s relaxed versioning
  • rebar.lock: Track file, now that rebar3 uses loose dependency versioning
  • configure.ac: When using rebar3, unlock dependencies that are disabled (#4212)
  • configure.ac: When using rebar3 with old Erlang, unlock some dependencies (#4213)
  • mix:exs: Move xmpp from included_applications to applications

Dependencies

  • Base64url: Use only when using rebar2 and Erlang lower than 24
  • Idna: Bump from 6.0.0 to 6.1.1
  • Jiffy: Use Json module when Erlang/OTP 27, jiffy with older ones
  • Jose: Update to the new 1.11.10 for Erlang/OTP higher than 23
  • Luerl: Update to 1.2.0 when OTP same or higher than 20, simplifies commit a09f222
  • P1_acme: Update to support Jose 1.11.10 and Ipv6 support (#4170)
  • P1_acme: Update to use Erlang’s json library instead of jiffy when OTP 27
  • Port_compiler: Update to 1.15.0 that supports Erlang/OTP 27.0

Development Help

  • .gitignore: Ignore ctags/etags files
  • make dialyzer: Add support to run Dialyzer with Mix
  • make format|indent: New targets to format and indent source code
  • make relive: Add Sync tool with Rebar3, ExSync with Mix
  • hook_deps: Use precise name: hooks are added and later deleted, not removed
  • hook_deps: Fix to handle FileNo as tuple {FileNumber, CharacterPosition}
  • Add support to test also EUnit suite
  • Fix code:lib_dir call to work with Erlang/OTP 27.0-rc2
  • Set process flags when Erlang/OTP 27 to help debugging
  • Test retractions in mam_tests

Documentation

  • Add some XEPs support that was forgotten
  • Fix documentation links to new URLs generated by MkDocs
  • Remove ... in example configuration: it is assumed and reduces verbosity
  • Support for version note in modules too
  • Mark toplevel options, commands and modules that changed in latest version
  • Now modules themselves can have version annotations in note

Installers and Container

  • make-binaries: Bump Erlang/OTP to 26.2.5 and Elixir 1.16.3
  • make-binaries: Bump OpenSSL to 3.3.1
  • make-binaries: Bump Linux-PAM to 1.6.1
  • make-binaries: Bump Expat to 2.6.2
  • make-binaries: Revert temporarily an OTP commit that breaks MSSQL (#4178)
  • CONTAINER.md: Invalid CTL_ON_CREATE usage in docker-compose example

WebAdmin

  • ejabberd_ctl: Improve parsing of commas in arguments
  • ejabberd_ctl: Fix output of UTF-8-encoded binaries
  • WebAdmin: Remove webadmin_view for now, as commands allow more fine-grained permissions
  • WebAdmin: Unauthorized response: include some text to direct to the logs
  • WebAdmin: Improve home page
  • WebAdmin: Sort alphabetically the menu items, except the most used ones
  • WebAdmin: New login box in the left menu bar
  • WebAdmin: Add make_command functions to produce HTML command element
  • Document ‘any’ argument and result type, useful for internal commands
  • Commands with ‘internal’ tag: don’t list and block execution by frontends
  • WebAdmin: Move content to commands; new pages; hook changes; new commands

Full Changelog

https://github.com/processone/ejabberd/compare/24.02...24.06

ejabberd 24.06 download & feedback

As usual, the release is tagged in the Git source code repository on GitHub.

The source package and installers are available in ejabberd Downloads page. To check the *.asc signature files, see How to verify ProcessOne downloads integrity.

For convenience, there are alternative download locations like the ejabberd DEB/RPM Packages Repository and the GitHub Release / Tags.

The ecs container image is available in docker.io/ejabberd/ecs and ghcr.io/processone/ecs. The alternative ejabberd container image is available in ghcr.io/processone/ejabberd.

If you consider that you’ve found a bug, please search or fill a bug report on GitHub Issues.

The post ejabberd 24.06 first appeared on ProcessOne.

by Jérôme Sautret at July 16, 2024 10:23

July 11, 2024

Ignite Realtime Blog

Openfire 4.8.3 Release

The Ignite Realtime community is pleased to announce the release of Openfire 4.8.3. This release contains an important fix for the thread-lock situation described in OF-2845. If you have noticed clients getting logged out or being unable to connect with Openfire 4.8.1 or 4.8.2, please try this release and report in the community forums if your issue persists.

The changelog denotes a few other issues addressed in this release. You can find download artifacts with the following sha256sum values:

b86bf8c01ede9cb2ae4f43dfd2f49239d9af2d73f650c7c2d52e5a936035e520  openfire-4.8.3-1.noarch.rpm
3f6da6c89ce701d974f6a1afe5ac0245f7112c5d165934eb1a85a749a1f040e2  openfire_4.8.3_all.deb
4fce60210033216556881fd9c988bea3ce30c0ed845f4dec3d4284ee835e8208  openfire_4_8_3.dmg
28b64c144001b0f6fb6eb4705d0bb1a92581774369378196182b8d35237b83be  openfire_4_8_3.exe
43d3b042357a5c975785f3f223490e3dd18b1f499c206be6cd0857172cc005fc  openfire_4_8_3.tar.gz
a09752fbe1226724d466028036fc65d31fe88e60a0efb27a87f1e10ab100fbb1  openfire_4_8_3_x64.exe
5c0638f150ccb61471b4b5152743b6d18cbe008473f454ed0091a13d7b80cb85  openfire_4_8_3.zip
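To check a downloaded artifact against the published values, compute its SHA-256 checksum and compare the two strings. A small illustration of the mechanics, demonstrated on in-memory bytes since the real artifacts are not present here (for a real download, you would hash the artifact file itself):

```python
import hashlib

# Illustration of checksum verification: compute the SHA-256 digest of
# some data and compare it to a published value. The "published" value
# below is the well-known SHA-256 of the bytes b"hello", used as a demo.
def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

published = "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"
assert sha256_hex(b"hello") == published  # a match means the download is intact
```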

For those of you curious, here are the 4.8.2 artifact download statistics (released 8 days ago)

Variant         Filename                      Downloads
Linux RPM       openfire-4.8.2-1.noarch.rpm         195
Debian          openfire_4.8.2_all.deb              635
Mac             openfire_4_8_2.dmg                   56
Windows 32bit   openfire_4_8_2.exe                  241
Windows 64bit   openfire_4_8_2_x64.exe              840
Tarball         openfire_4_8_2.tar.gz               135
Zip Archive     openfire_4_8_2.zip                   72
Total                                             2,174

Thanks for your interest and usage of Openfire.

For other release announcements and news follow us on Mastodon or X

4 posts - 3 participants

Read full topic

by akrherz at July 11, 2024 16:07

July 06, 2024

The XMPP Standards Foundation

The XMPP Newsletter June 2024

XMPP Newsletter Banner

XMPP Newsletter Banner

Welcome to the XMPP Newsletter, great to have you here again! This issue covers the month of June 2024.

XMPP and Google Summer of Code 2024

The XSF has been accepted as a hosting organisation at GSoC in 2024 again! These XMPP projects have received a slot and have kicked off with coding:

XSF and Google Summer of Code 2024

XSF and Google Summer of Code 2024

XSF Fiscal Hosting Projects

The XSF offers fiscal hosting for XMPP projects. Please apply via Open Collective. For more information, see the announcement blog post. Current projects you can support:

XMPP Events

  • XMPP Track at FOSSY: August 1-4th 2024 — Portland State University
  • XMPP Sprint in Berlin: On Friday, 12th to Sunday, 14th of July 2024.
  • Berlin XMPP Meetup [DE / EN]: Monthly meeting of XMPP enthusiasts in Berlin, every second Wednesday of the month
  • XMPP Italian happy hour [IT]: Monthly Italian XMPP web meeting, every third Monday of the month at 7:00 PM local time (online event, with web meeting mode and live streaming).

XMPP Videos

Debian and XMPP in Wind and Solar Measurement talk at MiniDebConf Berlin 2024.

XMPP Articles

XMPP Software News

XMPP Clients and Applications

XMPP Servers

  • Tigase XMPP Server 8.4.0 was released - Most notable features are support for Portable Import/Export Format (XEP-0227), ability to configure users with push devices to show as away, ability to moderate MUCs and support for xmppbl.org.
  • ejabberd 24.06: Deep Work Release! - With four months of work, close to 200 commits, including several minor improvements in the core ejabberd, and a lot of improvements in the administrative parts of ejabberd, like the WebAdmin and new API commands.
    ejabberd WebAdmin interface

    ejabberd WebAdmin interface

XMPP Libraries & Tools

Extensions and specifications

The XMPP Standards Foundation develops extensions to XMPP in its XEP series in addition to XMPP RFCs.

Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).

Proposed

The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

  • Chat notification settings
    • This document defines an XMPP protocol extension to synchronise per-chat notification settings across different clients.
  • WebXDC
    • This document defines an XMPP protocol extension to communicate WebXDC widgets and their state updates.

New

  • Version 0.1.0 of XEP-0491 (WebXDC)
    • Promoted to Experimental (XEP Editor: dg)

Deferred

If an experimental XEP is not updated for more than twelve months, it will be moved off Experimental to Deferred. If there is another update, it will put the XEP back onto Experimental.

  • No XEPs deferred this month.

Updated

  • Version 0.2.0 of XEP-0421 (Anonymous unique occupant identifiers for MUCs)
    • Make explicit that one can’t just hash the real JID.
    • Expand security considerations.
    • Add schema.
    • Fix some examples captions and casing (mw)
  • Version 1.1.1 of XEP-0153 (vCard-Based Avatars)
    • XEP-0054 says “Email addresses MUST be contained in a <USERID> element”. (egp)
  • Version 1.2.2 of XEP-0107 (User Mood)
    • Fixed typo (XEP Editor (dg))

Last Call

Last calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call can help improve the XEP before returning it to the Council for advancement to Stable.

  • No Last Call this month.

Stable

  • No XEP moved to Stable this month.

Deprecated

  • No XEP deprecated this month.

Rejected

  • No XEP rejected this month.

Spread the news

Please share the news on other networks:

Subscribe to the monthly XMPP newsletter
Subscribe

Also check out our RSS Feed!

Looking for job offers or want to hire a professional consultant for your XMPP project? Visit our XMPP job board.

Newsletter Contributors & Translations

This is a community effort, and we would like to thank translators for their contributions. Volunteers and more languages are welcome! Translations of the XMPP Newsletter will be released here (with some delay):

  • English (original): xmpp.org
    • General contributors: Adrien Bourmault (neox), Alexander “PapaTutuWawa”, Arne, cal0pteryx, emus, Federico, Gonzalo Raúl Nemmi, Jonas Stein, Kris “poVoq”, Licaon_Kter, Ludovic Bocquet, Mario Sabatino, melvo, MSavoritias (fae,ve), nicola, Simone Canaletti, singpolyma, XSF iTeam
  • French: jabberfr.org and linuxfr.org
    • Translators: Adrien Bourmault (neox), alkino, anubis, Arkem, Benoît Sibaud, mathieui, nyco, Pierre Jarillon, Ppjet6, Ysabeau
  • Italian: notes.nicfab.eu
    • Translators: nicola
  • Spanish: xmpp.org
    • Translators: Gonzalo Raúl Nemmi

Help us to build the newsletter

This XMPP Newsletter is produced collaboratively by the XMPP community. Each month’s newsletter issue is drafted in this simple pad. At the end of each month, the pad’s content is merged into the XSF Github repository. We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. You have a project and want to spread the news? Please consider sharing your news or events here, and promote it to a large audience.

Tasks we do on a regular basis:

  • gathering news in the XMPP universe
  • short summaries of news and events
  • summary of the monthly communication on extensions (XEPs)
  • review of the newsletter draft
  • preparation of media images
  • translations
  • communication via media accounts

Unsubscribe from the XMPP Newsletter

To unsubscribe from this list, please log in first. If you have not previously logged in, you may need to set up an account with the appropriate email address.

License

This newsletter is published under CC BY-SA license.

July 06, 2024 00:00

July 04, 2024

ProcessOne

Breaking Down the Costs of Large Messaging Services

I will comment on an interesting article from Meredith Whittaker and Joshua Lund breaking down the cost of running a large-scale messaging platform.

It estimates that the cost to operate Signal messaging will reach 50 million dollars per year in 2025.

🔗 Privacy is Priceless, but Signal is Expensive

As of the end of 2023, the cost breakdown for running a large messaging platform like Signal was as follows:

  • Storage: $1.3 million per year (9.3%). It is interesting to note that Signal does not store message history on the server. Message history is stored on the client.
  • Servers: $2.9 million per year (20.7%). Cost of cloud servers to support the messaging service.
  • Registration Fees: $6 million per year (42.9%). This is the cost required to validate phone numbers and perform other validations.
  • Total Bandwidth: $2.8 million per year (20.0%).
  • Additional Services: $700,000 per year (5.0%). Uptime monitoring, outage alerts, redundant capacity for disaster recovery purposes, maintenance contracts, etc.

Infrastructure Costs (as of November 2023): Approximately $14 million per year, to support between 40 and 50 million monthly active users.
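As a quick sanity check, the line items above sum to roughly the quoted total. A sketch using the amounts listed (in millions of USD):

```python
# Sanity check: the cost line items quoted above should roughly sum
# to the ~$14M/year infrastructure total. Amounts in millions of USD,
# taken directly from the list in this post.
costs_musd = {
    "storage": 1.3,
    "servers": 2.9,
    "registration_fees": 6.0,
    "bandwidth": 2.8,
    "additional_services": 0.7,
}
total = sum(costs_musd.values())
print(f"{total:.1f}")  # 13.7, close to the ~$14M quoted
```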

And this is just the infrastructure cost! You need to add all the associated costs of operating an organization employing more than 50 people.

This article is interesting on several accounts:

  • If you want to run a messaging platform at scale, we can help you with that, but be sure to properly assess your operating costs as well. Ensure your ambition aligns with your business model.
  • This also shows the impact of the centralized approach vs a federated model. With a centralized model, the organization running the platform has to assume all the costs for operating the platform. In a federated model like XMPP, the costs can be split across the whole network.

The technical design of a messaging service significantly impacts its operational costs. Therefore, it’s crucial to make business-model and technical choices simultaneously and to remain flexible as the platform grows, because they are two sides of the same coin. Aligning these decisions ensures both financial sustainability and technical feasibility.

The post Breaking Down the Costs of Large Messaging Services first appeared on ProcessOne.

by Mickaël Rémond at July 04, 2024 15:53

Erlang Solutions

The Strategic Advantage of Outsourcing with Erlang and Elixir

We’re in the midst of rapid technological change (AI, IoT, machine learning, etc.), and businesses are facing new obstacles. There is now a demand to balance company time and budgets amid all day-to-day responsibilities. Because of this, outsourcing services have become a strategic move for many. 

Let’s look into how Erlang and Elixir programming languages help with business outsourcing. We’ll discuss their expertise in security, scalability and flexibility and how they support business goals.

Understanding the importance of outsourcing

Before we get into its benefits, it’s important to understand the growing importance of outsourcing tech for businesses. Gartner describes outsourcing as a way to “deliver IT-enabled processes, application services, and infrastructure solutions for business outcomes.” 

The global Business Process Outsourcing (BPO) market reached $280.64bn in 2023 and is projected to grow 9.6% annually from 2024 to 2030.

This rise is driven by digital transformation, a focus on cost optimisation, and the increasing demand for specialised services. Companies are focusing more on efficiency and agility for long-term success, and outsourcing provides that much-needed flexibility: they can leverage top-tier expertise without the need for full-time in-house commitment.

Improved compliance and security

All businesses have sensitive data. To manage this, it must be understood that compliance and security go hand in hand. Compliance means adhering to the standards and regulations that apply to a given sector, and it falls into two categories: external laws (regulatory compliance) and internal policies (corporate compliance). Some of the most common include:

External laws:

  • General Data Protection Regulation (GDPR) – European Union regulation.
  • Payment Card Industry Data Security Standard (PCI DSS) – Global standards for credit card information.
  • Health Insurance Portability and Accountability Act (HIPAA) – U.S. law for health data privacy and security.
  • Sarbanes-Oxley Act (SOX) – U.S. law for financial reporting and auditing standards for public companies.
  • California Consumer Privacy Act (CCPA) – California state law, improving consumer privacy rights.
  • Personal Information Protection Law (PIPL) – Chinese law, regulating the protection of personal information.

Internal policies:

  • Data Security Policies – Guidelines to secure sensitive information within organisations.
  • IT Security Standards – Standards for maintaining the security of IT systems and networks.
  • Access Control Policies – Policies to define access rights to data and systems, based on roles and responsibilities.
  • Data Retention and Disposal Policies – Rules that govern the retention and secure disposal of data.
  • Incident Response Plans – Procedures for responding to and mitigating data breaches or security incidents.

These are all designed to safeguard organisational data and systems from breaches and misuse.

Leveraging functional programming for enhanced security

An important benefit of Erlang and Elixir, as functional languages, lies in their focus on code clarity and predictability, which is important for security. Even if your business operates the most complex of systems, both languages allow for clear and easy testing, so developers can identify and rectify system vulnerabilities effectively.


Another feature of functional programming languages is immutability: once data is created, it cannot be changed. For businesses, this ensures that data integrity is maintained, preventing any unauthorised changes to the system.
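To make this concrete, here is a minimal Elixir sketch of immutability: operations return new data rather than modifying values in place.

```elixir
list = [1, 2, 3]

# Prepending builds a *new* list; `list` itself is never modified.
longer = [0 | list]

IO.inspect(list)    # [1, 2, 3] (unchanged)
IO.inspect(longer)  # [0, 1, 2, 3]
```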

Encryption support

Both languages excel in handling concurrent connections and processes. This is crucial for environments where multiple devices need to communicate simultaneously without performance bottlenecks.

Erlang and Elixir are equipped with powerful libraries that support industry-standard encryption protocols, allowing developers to implement strong encryption mechanisms quickly and effectively. This safeguards business data against unauthorised access and breaches.

Their strengths collectively ensure that business devices can communicate securely and reliably. Business data is supported and protected without compromise. To learn more about encryption support, take a look at our five tips to ensure business security.

Cost management 

According to Deloitte, the top reason (70%) for business outsourcing is cost reduction. Outsourcing ensures that you are only paying for the services you need when you need them, leading to cost management and savings. 

Businesses are paying external providers to:

  • Maintain their systems and carry out required tasks
  • Save money on upfront costs, especially when starting a new business or project
  • Advise on the most cost-effective systems and configure them to better suit business needs

Concerns over training expenses, workspaces, and further equipment are also alleviated, as your team already has the necessary resources to maintain your systems.

Cost-effectiveness in Elixir 

Elixir’s expressive syntax and the high developer productivity of Phoenix and LiveView can deliver more with less code, in less time, and with smaller teams. Its modern language features, such as pattern matching and powerful metaprogramming capabilities, enhance developer efficiency and reduce the complexity of codebases. The concurrency model inherited from Erlang allows for efficient handling of many simultaneous tasks, which can lead to cost savings on infrastructure. Also, its vibrant community and extensive libraries allow for rapid development and quick problem-solving. This is another great pro for businesses as it further reduces development time and associated costs.

Cost-effectiveness in Erlang

Erlang’s lightweight process model allows developers to handle numerous simultaneous operations with minimal resource consumption. This leads to significant infrastructure cost savings. 
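As a rough illustration of that process model, the Elixir snippet below (Elixir runs on the same BEAM VM) spawns 100,000 processes, something that would be prohibitively expensive with OS threads:

```elixir
# Each BEAM process starts with a tiny heap (a few hundred words),
# so spawning 100,000 of them completes quickly on ordinary hardware.
pids =
  for _ <- 1..100_000 do
    spawn(fn ->
      receive do
        :stop -> :ok
      end
    end)
  end

IO.puts("spawned #{length(pids)} processes")
Enum.each(pids, &send(&1, :stop))
```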

The language’s robust error-handling capabilities and “let it crash” philosophy simplify the development of reliable, self-healing systems, reducing the need for extensive debugging and maintenance efforts. Erlang’s mature ecosystem and extensive libraries also enable developers to quickly implement complex features, decreasing development time and allowing smaller teams to achieve more with less. This overall efficiency translates into lower long-term operational and development costs for businesses.

Specialised Erlang and Elixir expertise 

Businesses that outsource their tech stack get value for money in expertise. Outsourcing provides access to the combined skills and knowledge of a dedicated team.

Expertise is one of the largest benefits of outsourcing your Erlang and Elixir development. Erlang in particular has been around for more than three decades, so businesses have the added benefit of known reliability and an established reputation.

The rise of Elixir in the programming language space has been impressive, to say the least. It leverages the strengths of the Erlang Virtual Machine (BEAM). So while it hasn’t been around for as long as Erlang, it has inherited the same fault-tolerant, concurrent and distributed computing capabilities.

The expertise of developers alleviates dealing with complex technical issues, resulting in improved operational efficiency, cost savings, and a stronger competitive edge in the market.

Focus on core business activities

Businesses outsource to concentrate on their competencies and strategic initiatives. They are reducing time-consuming tasks, which allows their internal teams to focus on core business activities. 

This enhanced focus can lead to higher productivity, improved quality, and scope for accelerated innovation.

Erlang and Elixir support

Here are just some examples of how Erlang and Elixir support business focus:

Scalability and performance

Both languages are designed to handle high levels of concurrency with ease. This allows businesses to seamlessly build and maintain applications in line with increasing demand, freeing up time to focus on growth instead of worrying about the limitations of infrastructure.

For more on the benefits of scalability, you can check out our post on scalable systems. 

Fault tolerance

Erlang and Elixir have built-in features for fault tolerance, such as supervision trees and isolated processes. This allows applications built in either language to recover almost instantly from failures. It reduces downtime and allows businesses to run smoothly without manual intervention.

Model of a supervision tree 
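In code, a minimal supervision tree looks like the sketch below. MyApp.PaymentTracker is a hypothetical worker module, and the :one_for_one strategy restarts only the child that crashed:

```elixir
defmodule MyApp.Application do
  use Application

  @impl true
  def start(_type, _args) do
    children = [
      # Hypothetical worker; if it crashes, the supervisor restarts it.
      {MyApp.PaymentTracker, []}
    ]

    # :one_for_one restarts only the crashed child, isolating failures.
    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end
```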

Rapid development

Elixir in particular shines in this department. Leveraging its productivity on Erlang’s underlying BEAM virtual machine, it provides a much more readable syntax and metaprogramming, allowing for faster development cycles which in turn, enables a faster time to market.

Maintainable code

We have already touched on the functional nature of both Erlang and Elixir. It encourages clear, modular, and maintainable code. This allows businesses to easily update and extend their applications, reducing the risk of bugs and improving the overall long-term maintainability.

Making a case for outsourcing

We’ve shown how utilising Erlang and Elixir can provide a host of strategic advantages for businesses. These languages provide expertise, enhanced security, cost-effective solutions, and scalability that allow businesses to offload day-to-day activities to concentrate on the bigger picture. If you’d like to learn more about how to make the most of your existing Erlang and Elixir tech stack, feel free to drop the team a line.

The post The Strategic Advantage of Outsourcing with Erlang and Elixir appeared first on Erlang Solutions.

by Erlang Solutions Team at July 04, 2024 10:59

July 03, 2024

Ignite Realtime Blog

Openfire 4.8.2 Release

Openfire 4.8.2 has landed!

This release addresses a number of issues in the real-time collaboration server created by the Ignite Realtime Community, aiming to reduce bugs and increase stability and performance.

Interested in getting started? You can download installers of Openfire here. Our documentation contains an upgrade guide that helps you update from an older version.

sha256sum checksum values for the release artifacts are as follows

4c2674fbf00768cf7ca9ccc9a6ef7e4aa693c19d9885ca469771677934634a40  openfire-4.8.2-1.noarch.rpm
76665dc80607516d12f1c8b7b323417e7993d2f87de2e82deeef43dd6a7d9761  openfire_4.8.2_all.deb
75c513db3c7e50fc5c28a7131aecc0c60ad2f858d7f04a9fe5d58a5de118afec  openfire_4_8_2.dmg
d5af1c2012d092c7c1cd9247db4e4d8039f2617adc9f212d75e549eeca0a389a  openfire_4_8_2.exe
4634e5be6314a5348e5e01413864a8ec6a7b3bbe6e2db1c051512c9bd72a199a  openfire_4_8_2.tar.gz
82c5abdf917b8958311f5813960f3b545266d99d0f646eac9dddbaf0ef52c905  openfire_4_8_2_x64.exe
3327bc610af606a2df28a7077f225a68cf2d04d30a4c37592a5d17f5c22e8c07  openfire_4_8_2.zip

If you have any questions, please stop by our community forum or our live groupchat. We are always looking for volunteers interested in helping out with Openfire development!

For other release announcements and news follow us on Mastodon or X.


by guus at July 03, 2024 16:37

June 28, 2024

Ignite Realtime Blog

Botz version 1.3.0 release

We have just released version 1.3.0 of the Botz framework for Openfire (the real-time communications server provided by the Ignite Realtime community)!

The Botz library extends the already rich and extensible Openfire with the ability to create internal user bots.

In this release, compatibility issues with Openfire 4.8.0 and later have been resolved. Thank you to Sheldon Robinson for helping us fix that!

Download the latest version of the Botz framework from its project page!

For other release announcements and news follow us on X and Mastodon.


by guus at June 28, 2024 08:40

Erlang Solutions

Let Your Database Update You with EctoWatch

Elixir allows application developers to create very parallel and very complex systems. Tools like Phoenix PubSub and LiveView thrive on this property of the language, making it very easy to develop functionality that requires continuous updates to users and clients.

But one thing that has often frustrated me is how to cleanly design an application to respond to database record updates. 

A typical pattern that I’ve used is to have a dedicated function which makes a database change (e.g. Shipping.insert_event). This function can contain a post-update step which sends out, for example, a PubSub broadcast. But this relies on the team using that function consistently. If there are other update functions (e.g. Shipping.insert_delivery) they also need to do the broadcast.

But the most fool-proof solution would be to have the database update the application whenever there is a change. Not only would this avoid needing to make sure all update functions send out broadcasts, but it also makes sure that the correct actions are taken whenever some external task or application updates the database directly.

While I knew that PostgreSQL had functionality (LISTEN/NOTIFY) to inform my applications about updates, it always seemed intimidating. So I finally decided to figure out how it worked and to make a library! I’d like to introduce EctoWatch, which is my attempt to implement this pattern in the simplest way possible.

Why Broadcast Database Updates?

Aside from the obvious case of updating LiveViews, there are a number of things you might want to do in response to record changes:

  • redoing a calculation/cache when source information changes
  • sending out emails about a change
  • sending out webhook requests
  • updating a GraphQL subscription

For example, if you insert a new status event for a tracked package, you may want to:

  • update any webpages/applications currently tracking the package
  • send updates about important events (like the package being delivered)
  • recalculate and update the estimated delivery date

Using EctoWatch

EctoWatch allows you to set up watchers in your application’s supervision tree which can track inserts, updates, and deletes on Ecto schemas which are backed by PostgreSQL tables:
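A sketch of such a watcher setup, based on the library’s README; module names like MyApp.Repo and MyApp.Shipping.PackageEvent are placeholders, and exact option names may vary between EctoWatch versions:

```elixir
# In your application's supervision tree (e.g. lib/my_app/application.ex):
children = [
  MyApp.Repo,
  {Phoenix.PubSub, name: MyApp.PubSub},
  {EctoWatch,
   repo: MyApp.Repo,
   pub_sub: MyApp.PubSub,
   watchers: [
     {MyApp.Shipping.PackageEvent, :inserted},
     {MyApp.Shipping.PackageEvent, :updated},
     {MyApp.Shipping.PackageEvent, :deleted}
   ]}
]
```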

Then processes can subscribe to the broadcasts sent by the watchers:
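For example (the schema module and event atom are placeholders; check the README for the exact call shape in your version):

```elixir
# Subscribe the current process to insert broadcasts for a schema:
EctoWatch.subscribe({MyApp.Shipping.PackageEvent, :inserted})
```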

If your process just needs to get updates about a specific record an ID can be given:
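Along these lines (hypothetical names, and the exact argument shape may differ by version):

```elixir
# Only receive update broadcasts for one specific record:
EctoWatch.subscribe({MyApp.Shipping.PackageEvent, :updated}, package_event_id)
```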

Then finally the module that implements your process (LiveView, GenServer, etc…) can handle messages about records:
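A LiveView-flavoured sketch, with the message shape as described in the README (names are placeholders):

```elixir
# Broadcasts arrive as ordinary process messages:
def handle_info({{MyApp.Shipping.PackageEvent, :updated}, %{id: id}}, socket) do
  # Reload the record (or recalculate derived data) and update the view.
  {:noreply, assign(socket, :event, MyApp.Shipping.get_event!(id))}
end
```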

You can also define which columns trigger messages on updates as well as which values (in addition to the ID) to send with messages. Definitely check out the repo’s README for more details on how to use EctoWatch!

Conclusion

I believe that EctoWatch can be a powerful new way to simplify how we deal with database changes. Allowing a quick configuration of watchers and using simple message passing with Phoenix PubSub, you can separate the concern of making a change from the concern of what happens as a result of the change. This allows your code to be more easily readable and refactorable.

If you’re in need of help with Elixir development, code and architecture reviews, and more then drop us a line.

The post Let Your Database Update You with EctoWatch appeared first on Erlang Solutions.

by Brian Underwood at June 28, 2024 07:22

June 25, 2024

Ignite Realtime Blog

Openfire restAPI plugin version 1.11.0 release

Earlier today, version 1.11.0 of the REST API plugin for Openfire was released!

The REST API Plugin provides the ability to manage Openfire (the real-time communications server created by the Ignite Realtime community) by sending a REST/HTTP request to the server. This plugin’s functionality is useful for applications that need to administer Openfire outside of the Openfire admin console.

This release mainly addresses compatibility issues with Openfire versions 4.8.0 and later. A big thank you to community member Anckermann for providing the bulk of the fixes!

The updated plugin should become available for download in your Openfire admin console in the course of the next few hours. Alternatively, you can download the plugin directly from the plugin’s archive page.

For other release announcements and news follow us on Mastodon or X.


by guus at June 25, 2024 18:25

June 21, 2024

Gajim

Gajim 1.9.1

Gajim 1.9.1 introduces a menu button, adds improvements for Security Labels, and fixes some bugs. Thank you for all your contributions!

What’s New

Since Gajim 1.9.0, you can toggle Gajim’s main menu bar by pressing Ctrl+M. In order to have a proper replacement for when the menu bar is hidden, we added a menu button to the top left, which contains all of the menu bar’s items.

If you are using Security Labels (XEP-0258) with Gajim, you can now correct labels on messages. Overall handling of Security Labels has been improved as well.

Last but not least, Gajim’s database migration has also been improved.

This release also comes with many bugfixes. Have a look at the changelog for a complete list.

Gajim

As always, don’t hesitate to contact us at gajim@conference.gajim.org or open an issue on our Gitlab.

June 21, 2024 00:00

June 20, 2024

Erlang Solutions

Exploring Key Trends in Digital Payments

Digital payments are essential to the global economy and have seen rapid and significant changes in recent years.

Let’s take a look at the key trends driving this change and at some of the emerging digital trends that are broadening the payments ecosystem. We’ll also look at how payments work within the broader payments ecosystem.

A look into the digital payments landscape

Evolving customer expectations and technological advances are driving innovation. Customers now prioritise speed, near-real-time payments, frictionless transactions and decentralised models. Fuelled by the pandemic, significant growth in digital commerce has led to record payment volumes in most markets. These factors make payments one of the most interesting areas of financial services. There are opportunities for innovative fintechs to provide better client experiences and for traditional players to expand their services.

Market competition is driving down fees. It is a challenge for traditional players to maintain the same levels of profitability while using existing payment infrastructure. We have seen fintech businesses launch into the payments ecosystem, offering a more diverse range of services. 

Traditional payment companies are responding by leveraging the huge amounts of data at their disposal to guide a strategy of adding to their offering. These new services are in areas including loyalty, tailored offers, data insights, risk management and more.

A cashless world leads the way

Consumers’ shift to digital channels drives demand for seamless fulfilment and instant gratification. A recent Capgemini World Payments Report survey found that the share of respondents for whom e-commerce accounts for more than half of their monthly spending rose from 24% before the pandemic to 46% today.


Retail E-commerce sales worldwide revenue

According to Statista, 91% of the global population is expected to own a smartphone by 2026.

A majority of people have now experienced the efficiencies offered by digital payments. It is unlikely they will ever return to the older, inefficient ways of the past.

Nayapay, one of our clients in the South Asian market, is using the MongooseIM chat engine. They are an example of players in the payments space seizing the opportunity to disrupt local markets.

Their chat-based payments app targets the unbanked in Pakistan. It is built around fusing the penetration of smartphone usage with people’s willingness to integrate transactions into their daily digital activities, adding ease of cashless payments to their everyday lives.

The growing demand for faster payments

Demand for instant transactions is driving change in cross-border payments, international remittances and e-commerce. Mirroring the speed of cash transactions electronically used to be a challenge. Now, the introduction of real-time clearing and settlement facilities across markets makes processing payments almost instant. 

Studies by Federal Reserve Financial Services (US) showed strong growth of digital wallets in 2023: businesses increased their use by 31% from the previous year, and consumers by 32%.

Here are some more statistics on the most popular use cases for faster payments:


Source: Federal Reserve study sheet

Peer to Peer transactions

As shown by the chart, one of the most popular cases is person-to-person, or peer-to-peer (P2P) payments. 

Consumers are embracing the simplicity of peer-to-peer services. Zelle, Venmo (US), Kuflink and easyMoney (UK) are commonly used for everyday transactions. These services are important to people seeking quick, hassle-free ways to settle informal payments. 

The availability of P2P services is expected to expand to meet the growing market demand. 

According to Precedence Research, the global peer-to-peer (P2P) lending market size was valued at USD 110.9 billion in 2023. It is expected to hit over USD 1,168.1 billion by 2033. 


From peer-to-peer services that enable informal transactions to the widespread adoption of digital payments, consumers are welcoming the future of finance. As technology continues to reshape how we conduct transactions, the prospect of a cashless society becomes more conceivable.

Growth in embedded payments

Lengthy checkout pages are a turn-off for e-commerce customers. Embedded payments allow them to skip the additional steps: instead of being limited to a single, clickable button on your app or website, the customer can choose their desired payment method, such as Klarna, Amazon Pay, or PayPal, click the embedded link and complete the transaction.

Amazon- pioneering embedded payments

Amazon customers can log into their accounts that already contain stored payment details and shipping addresses. They then use the “Buy Now” button to instantly complete their purchase.


It requires only a payment confirmation and avoids the need to re-enter payment and shipping information. This quick transaction process takes just seconds and has become commonplace with apps such as Uber, GrubHub, and more.

Integration of embedded finance

By integrating financial products into non-financial platforms, embedded finance is enhancing the convenience and speed of digital payments.

For consumers, embedded finance offers additional benefits including:

  • Better understanding of optimal payment terms for customers
  • Seamless checkouts
  • Easy payment requests
  • Financing options such as buy now, pay later (BNPL), all within a unified customer experience

Beyond BNPL, other financial products like lending and card issuing are also being integrated into these platforms. Major banks can reach millions of new users through Banking-as-a-Service (BaaS) APIs provided to technology businesses and platforms outside the traditional financial services industry.

Leveraging payments data

The diverse range of digital touchpoints involved in a cashless payments ecosystem provides vast amounts of data. 

This is important for banks and fintechs looking to grow client relationships based on analytics and insights. Companies that can unlock the true value of payment activity data by leveraging artificial intelligence (AI) and machine learning (ML) tools can offer more efficient, tailored products and a more secure, protected environment.

The implementation of the ISO 20022 messaging standard is a vital part of improving the amount and quality of payment data available. As the global standard for payment messaging, ISO 20022 provides better-structured and more granular data: a shared language for transactions made by anyone, anywhere.

Journey towards digital

The modern digital payments ecosystem is varied. Overall, the journey towards more digital, open and real-time operations mirrors how society at large now lives online.

We help to create digital and mobile payment solutions that enhance the customer experience and protect customer data. We work with clients who provide services, cryptocurrency, blockchain, embedded finance, payment gateways and more. To learn more about our offering, you can contact our team directly.

The post Exploring Key Trends in Digital Payments appeared first on Erlang Solutions.

by Erlang Solutions Team at June 20, 2024 09:55

June 18, 2024

Monal IM

New fundraising campaign

Our current development iPhone 8, which we bought in 2020, is getting on in years, is not able to run iOS 17 and the battery is broken.

So it’s that time again: we are launching a new fundraising campaign for 350 EUR to finance a new development iPhone capable of running iOS 17 and several upcoming iOS versions. Currently we are aiming for an iPhone 13.

You can view our donation options over here: Donate

June 18, 2024 00:00

June 13, 2024

Erlang Solutions

Top 5 Tips to Ensure IoT Security for Your Business

In an increasingly tech-driven world, the implementation of IoT for business is a given. According to the latest data, there are currently 17.08 billion connected IoT devices, and counting. A growing number of devices requires robust IoT security to maintain privacy, protect sensitive data and prevent unauthorised access to connected devices.

A single compromised device can be a threat to an entire network. For businesses, it can lead to major financial losses, operational disruptions and a major impact on brand reputation. We will take you through five key considerations for ensuring IoT security for businesses: data encryption methods, password management, IoT audits, workplace education and the importance of disabling unused features.

Secure password practices

Weak passwords make IoT devices susceptible to unauthorised access, leading to data breaches, privacy violations and increased security risks. When companies install devices without changing default passwords, or by creating oversimplified ones, they create a gateway entry point for attackers. Implementing strong and unique passwords ensures protection against these potential threats.

Password managers

Each device in a business should have its own unique password that should change on a regular basis. According to the 2024 IT Trends Report by JumpCloud, 83% of organisations surveyed use password-based authentication for some IT resources.

Consider using a business-wide password manager to store your passwords securely and that allows you to use unique passwords across multiple accounts. 

Password managers are also incredibly important as they:

  • Help to spot fake websites, protecting you from phishing scams and attacks.
  • Allow you to synchronise passwords across multiple devices, making it easy and safe to log in wherever you are.
  • Track if you are re-using the same password across different accounts for additional security.
  • Spot any password changes that could appear to be a breach of security.

Multi-factor authentication (MFA)

Multi-factor authentication (MFA) adds an additional layer of security. It requires additional verification beyond just a password, such as SMS codes, biometric data or other forms of app-based authentication. You’ll find that many password managers actually offer built-in MFA features for enhanced security.

Some additional security benefits include:

  • Regulatory compliance
  • Safeguarding without password fatigue
  • Easily adaptable to a changing work environment
  • An extra layer of security compared to two-factor authentication (2FA)

As soon as an IoT device becomes connected to a new network, it is strongly recommended that you reset any settings with a secure, complex password. Using password managers allows you to generate unique passwords for each device to secure your IoT endpoints optimally.

Data encryption at every stage

Why is data encryption so necessary? With the continued growth of connected devices, data protection is a growing concern. In IoT, sensitive information (personal data, financial details, location etc.) is vulnerable to cyber-attacks if transmitted over public networks. When done correctly, data encryption renders personal data unreadable to anyone without authorised access. Once data is encrypted, it is safeguarded, mitigating unnecessary risks.


Additional benefits to data encryption

How to encrypt data in IoT devices

There are a few data encryption techniques available to secure IoT devices from threats. Here are some of the most popular techniques:

Triple Data Encryption Standard (Triple DES): Uses three rounds of encryption to secure data, offering a high level of security for mission-critical applications.

Advanced Encryption Standard (AES): A commonly used encryption standard, known for its high security and performance. This is used by the US federal government to protect classified information.

Rivest-Shamir-Adleman (RSA): This is based on public and private keys, used for secure data transfer and digital signatures.

Each encryption technique has its strengths, but it is crucial to choose what best suits the specific requirements of your business.

Encryption support with Erlang/Elixir

When implementing data encryption protocols for IoT security, Erlang and Elixir offer great support to ensure secure communication between IoT devices. We go into greater detail about IoT security with Erlang and Elixir in a previous article, but here is a reminder of the capabilities that make them ideal for IoT applications:

  1. Concurrent and fault-tolerant nature: Erlang and Elixir have the ability to handle multiple concurrent connections and processes at the same time. This ensures that encryption operations do not bottleneck the system, allowing businesses to maintain high-performing, reliable systems through varying workloads. 
  2. Built-in libraries: Both languages come with powerful libraries, providing effective tools for implementing encryption standards, such as AES and RSA.
  3. Scalable: Both systems are inherently scalable, allowing for secure data handling across multiple IoT devices. 
  4. Easy integration: The syntax of Elixir makes it easier to integrate encryption protocols within IoT systems. This reduces development time and increases overall efficiency for businesses.

Erlang and Elixir can be powerful tools for businesses, enhancing the security of IoT devices and delivering high-performance systems that ensure robust encryption support for peace of mind.
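As a small illustration of those built-in libraries, here is an Elixir sketch using Erlang’s :crypto module for AES-256 in CTR mode. Key management is deliberately elided; real deployments need secure key storage and would typically prefer an authenticated mode such as GCM:

```elixir
key = :crypto.strong_rand_bytes(32)  # 256-bit key
iv = :crypto.strong_rand_bytes(16)   # 128-bit initialisation vector

plaintext = "sensor reading: 21.4C"

# Encrypt, then decrypt, with OTP's built-in :crypto module.
ciphertext = :crypto.crypto_one_time(:aes_256_ctr, key, iv, plaintext, true)
decrypted = :crypto.crypto_one_time(:aes_256_ctr, key, iv, ciphertext, false)

true = decrypted == plaintext
```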

Regular IoT inventory audits

Performing regular security audits of your systems is critical in protecting against vulnerabilities. Keeping up with the pace of IoT innovation often means some IoT security considerations get pushed to the side, but identifying weaknesses in existing systems allows organisations to implement much-needed strategies.

Types of IoT security testing

We’ve explained how IoT audits are key in maintaining secure systems. Now let’s take a look at some of the common types of IoT security testing options available:


IoT security testing types

Firmware software analysis

Firmware analysis is a key part of IoT security testing. It explores the firmware, the core software embedded into the hardware of IoT products (routers, monitors etc.). Examining the firmware means security tests can identify system vulnerabilities that might not be initially apparent, improving the overall security of business IoT devices.

Threat modelling

In this popular testing method, security professionals create a checklist based on potential attack methods, and then suggest ways to mitigate them. This ensures the security of systems by offering analysis of necessary security controls.

IoT penetration testing

This type of security testing finds and exploits security vulnerabilities in IoT devices. IoT penetration testing is used to check the security of real-world IoT devices, including the entire ecosystem, not just the device itself.

Incorporating these testing methods is essential to help identify and mitigate system vulnerabilities. Being proactive and addressing these potential security threats can help businesses maintain secure IoT infrastructure, enhancing operational efficiency and data protection.

Training and educating your workforce

Employees can be an entry point for network threats in the workplace. 

The days of BYOD (bring your own device), when an employee’s work supplies consisted of the laptops, tablets and smartphones they brought to the office to assist with their tasks, are long gone. Now, personal IoT devices are also used in the workplace. Think of popular wearables like smartwatches, fitness trackers, e-readers and portable game consoles. Even portable appliances like smart printers and smart coffee makers are increasingly popular in office spaces.

Example of increasing IoT devices in the office. Source: House of IT

The various IoT devices spread throughout your business network are among the most vulnerable targets for cybercrime, attacked through techniques such as phishing, credential theft and malware.

Phishing attempts are among the most common. Even the most ‘tech-savvy’ person can fall victim to them. Attackers are skilled at making phishing emails seem legitimate, spoofing real domains and email addresses so messages appear to come from a legitimate business.

Malware is another popular technique: it is concealed in email attachments, sometimes disguised as Microsoft Office documents that look unassuming to the recipient.

Remote working and IoT business security

Malicious actors are increasingly targeting remote workers. Research reported by GlobeNewswire shows that remote working increases the frequency of cyber attacks by a staggering 238%.

Because remote employees house sensitive data on various IoT devices, training is even more important. There is now a rise in companies moving to secure the personal IoT devices used for home working with the same rigour as they would corporate devices.

How are they doing this? IoT management solutions. These provide visibility and control over IoT devices. Key players across the IoT landscape are creating increasingly sophisticated IoT management solutions, helping companies administer devices and roll out relevant updates remotely.

The use of IoT devices is inevitable if your enterprise has a remote workforce. 

Regular remote updates for IoT devices are essential to ensure the software is up-to-date and patched. But even with these precautions, you should be aware of IoT device security risks and take steps to mitigate them.

Importance of IoT training

Getting employees involved in the security process encourages awareness and vigilance for protecting sensitive network data and devices.

Comprehensive and regularly updated education and training are vital to prepare end-users for various security threats. Remember that a business network is only as secure as its least informed or untrained employee.

Here are some key points employees need to know to maintain IoT security:

  • The best practices for security hygiene (for both personal and work devices and accounts).
  • Common and significant cybersecurity risks to your business.
  • The correct protocols to follow if they suspect they have fallen victim to an attack.
  • How to identify phishing, social engineering, domain spoofing, and other types of attacks.

Investing the time and effort to ensure your employees are well informed and prepared for potential threats can significantly enhance your business’s overall IoT security standing.

Disable unused features to ensure IoT security

Enterprise IoT devices come with a range of functionalities. Take a smartwatch, for example. Its main purpose as a watch is of course to tell the time, but it might also include Bluetooth, Near-Field Communication (NFC), and voice activation. If you aren’t using these features, you’re leaving openings for hackers to breach your device. Deactivating unused features reduces the risk of cyberattacks, as it limits the ways hackers can breach these devices.

Benefits of disabling unused features

If these additional features are not being used, they can create unnecessary security vulnerabilities. Disabling unused features helps to ensure IoT security for businesses in several ways:

  1. Reduces attack surface: Unused features provide extra entry points for attackers. Disabling features limits the number of potential vulnerabilities that could be exploited, in turn reducing attacks overall.
  2. Minimises risk of exploits: Many IoT devices come with default settings that enable features which might not be necessary for business operations. Disabling these features minimises the risk of weak security.
  3. Improves performance and stability: Unused features can consume resources and affect the performance and stability of IoT devices. By disabling them, devices run more efficiently and are less likely to experience issues that could be exploited by attackers.
  4. Simplifies security management: Managing fewer active features simplifies security oversight. It becomes simpler to monitor and update any necessary features.
  5. Enhances regulatory compliance: Disabling unused features can help businesses meet regulatory requirements by ensuring that only the necessary and secure functionalities are active.

To conclude

The continued adoption of IoT is not stopping anytime soon. Neither are the possible risks. Implementing even some of the five tips we have highlighted can significantly mitigate the risks associated with the growing number of devices used for business operations.

Ultimately, investing in your business’s IoT security is all about safeguarding the entire network, maintaining the continuity of day-to-day operations and preserving the reputation of your business. You can learn more about our current IoT offering by visiting our IoT page or contacting our team directly.

The post Top 5 Tips to Ensure IoT Security for Your Business appeared first on Erlang Solutions.

by Erlang Solutions Team at June 13, 2024 11:01

June 10, 2024

Gajim

Gajim 1.9.0

Half a year after the last release, Gajim 1.9.0 is finally here. 🎉 This release brings long awaited support for message replies and message reactions. Message Moderation has been improved as well. Say hello to voice messages! Thank you for all your contributions!

What’s New

It took us quite some time, but now it’s here: Gajim 1.9 comes with a complete database overhaul, which enables new features such as Message Replies and Message Reactions.

Message Replies (XEP-0461: Message Replies) offer rich context, which wasn’t available previously when using message quotes. With Message Replies, Gajim shows you the author’s profile picture, nickname, and also the time the message was sent. Clicking a referenced message will jump to the original message.

Message Replies in Gajim 1.9

Message Reactions (XEP-0444: Message Reactions) allow you to react to messages with an emoji of your choice. When hovering over a message, a floating action menu appears. This menu offers three quick reactions and even more when clicking the plus button. Hovering over a reaction shows a tooltip with info about who sent which reaction - especially useful in group chats.

Message Reactions in Gajim 1.9.0

Message Moderation (XEP-0425: Moderated Message Retraction) has been updated to the latest version while staying compatible with older implementations, thus improving Gajim’s tools against spam.

The new database backend is based on SQLAlchemy and allows us to easily adapt to new requirements of upcoming standards, for example message retraction and rich file transfers.

Thanks to our contributor @mesonium, who brought audio previews to Gajim a year ago, Gajim is now able to record voice messages.

Voice message recording in Gajim 1.9.0

What else changed:

  • Gajim’s message input now offers proper undo/redo functionalities
  • Messages containing only an emoji are now displayed larger
  • Message merging has been improved
  • Notifications now show icons (e.g. a user’s profile picture) in more desktop environments
  • Your connection state is now shown directly above the message input
  • Group chat messages are displayed as ‘pending’ until they have been acknowledged by the server
  • Group chat avatars can now be removed
  • The main menu can now be toggled via Ctrl+M
  • ‘Start Chat’ now shows contact list groups, status messages, and more
  • Issues with using the Ctrl+C shortcut for copying message content have been fixed

This release also comes with many bugfixes. Have a look at the changelog for a complete list.

Gajim

As always, don’t hesitate to contact us at gajim@conference.gajim.org or open an issue on our Gitlab.

June 10, 2024 00:00

June 06, 2024

Erlang Solutions

10 Unusual Blockchain Use Cases

When Blockchain technology was first introduced with Bitcoin in 2009, no one could have foreseen its impact on the world or the unusual cases of blockchain that have emerged. Fast forward to now and Blockchain has become popular for its ability to ensure data integrity in transactions and smart contracts. 

Thanks to its cost-effectiveness, transparency, speed and top security, it has found its way into many industries, with blockchain spending expected to reach $19 billion this year.

In this post, we will be looking into 10 use cases that have caught our attention, in industries benefiting from Blockchain in unusual and impressive ways.

Ujo Music – Transforming payment for artists

Let’s start exploring the first unusual use case for blockchain with Ujo Music.

Ujo Music started with a mission to get artists paid fairly for their music, addressing the issues of inadequate royalties from streaming and complicated copyright laws.

To solve this, they turned to blockchain technology, specifically Ethereum. Using it, Ujo Music built a community that allows music owners to receive royalty payments automatically. Artists also retain the rights to their work thanks to smart contracts and cryptocurrencies. This approach lets artists access their earnings instantly, without the fees or waiting times associated with more traditional systems.

As previously mentioned, blockchain also allows for transparency and security, which is key in preventing theft and copyright infringement of the owner’s work. Ujo Music is transforming the payment landscape for artists in the digital age, allowing for better management of and rights over their music.

Cryptokitties – Buying virtual cats and gaming

For anyone looking to collect and breed digital cats in 2017, Cryptokitties was the place to be. While the idea of a cartoon crypto animation seems incredibly niche, the initial Cryptokitties craze is one that cannot be denied in the blockchain space.

Upon its launch, it immediately went viral, with the alluring tagline “The world’s first Ethereum game.” According to nonfungible.com, the NFT felines saw sales volume spike from just 1,500 on launch day to 52,000 by the end of 2017.

CryptoKitties was among the first projects to harness smart contracts by attaching code to data constructs called tokens on the Ethereum blockchain. Each chunk of the game’s code (which it refers to as a “gene”) describes the attributes of a digital cat. Players buy, collect, sell, and even breed new felines. 

Source: Dapper Labs

Just like individual Ethereum tokens and bitcoins, the cat’s code also ensures that the token representing each cat is unique, which is where the nonfungible token, or NFT, comes in. A fungible good is, by definition, one that can be replaced by an identical item—one bitcoin is as good as any other bitcoin. An NFT, by contrast, has a unique code that applies to no other NFT.
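The real CryptoKitties contract is Solidity code on Ethereum, but the fungible vs. non-fungible distinction it relies on can be sketched in a few lines of Python (all class and token names here are illustrative, not the actual contract API):

```python
class FungibleLedger:
    """Fungible: only balances matter; any unit is interchangeable."""
    def __init__(self):
        self.balances = {}

    def mint(self, owner: str, amount: int):
        self.balances[owner] = self.balances.get(owner, 0) + amount

class NFTLedger:
    """Non-fungible: every token has a unique id and its own owner."""
    def __init__(self):
        self.owner_of = {}  # token_id -> owner

    def mint(self, token_id: str, owner: str):
        if token_id in self.owner_of:
            raise ValueError("token id already exists: NFTs are unique")
        self.owner_of[token_id] = owner

nft = NFTLedger()
nft.mint("kitty-1", "alice")  # a one-of-a-kind digital cat
nft.mint("kitty-2", "bob")    # a different cat, not interchangeable with kitty-1
```

In the fungible ledger two deposits simply add up; in the NFT ledger each token id exists exactly once, which is what makes each CryptoKitty unique.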

Blockchain can be used in gaming more broadly to create digital and analogue gaming experiences. By buying CryptoKitties, players could invest in, build, and extend their gaming experience.

ParagonCoin for Cannabis

Our next unusual blockchain case stems from legal cannabis. 

Legal cannabis is a booming business, expected to be worth $1.2 billion by the end of 2024. With this amount of money, a cashless solution offers business owners further security. Transactions are easily trackable and offer transparency and accountability that traditional banking doesn’t.

ParagonCoin business roadmap

Transparency in the legal cannabis space is key for businesses looking to challenge its negative image. ParagonCoin, a cryptocurrency startup, had a unique value proposition for its entire ecosystem, making it clear that its business would not be used for any illegal activity.

Though now defunct, ParagonCoin was a pioneer in its field in utilising B2B payments. At the time of its launch, paying for services was only possible with cash, as businesses related to cannabis were not allowed to officially have a bank account.

This created a dire knock-on effect, making it difficult for businesses to pay solicitors, staff and other operational costs. The only ways to get an operation running would have been unsafe, inconvenient and possibly illegal. ParagonCoin remedied this by asking businesses to adopt its PRG token as a payment system to address these immediate issues.

Here are some other ways ParagonCoin adopted blockchain technology in the cannabis industry:

  • Regulatory compliance – Simplifying compliance issues at a local and federal level.
  • Secure transactions – Utilising smart contracts to automate and enforce agreement terms, reducing the risk of fraud.
  • Decentralised marketplace – Creating a platform for securely listing and reviewing products and services, while fostering a community of engaged users, businesses and regulators.
  • Innovative business models – Facilitating crowdfunding to raise business capital transparently.

These cases highlight blockchain technology’s ability to enhance transparency, compliance and security within even the most unexpected industries.

Siemens partnership – Sharing solar power

Siemens has partnered with the startup LO3 Energy on an app called Brooklyn Microgrid. It allows residents of Brooklyn who own solar panels to transfer their energy to others who don’t have this capability. Consumers and solar panel owners are in control of the entire transaction.

Residents with solar panels sell excess energy back to their neighbours, in a peer-to-peer transaction. If you’d like to learn more about the importance of peer-to-peer (p2p) networks, you can check out our post about the Principles of Blockchain.

Microgrids reduce the amount of energy that gets lost during transmission, providing a more efficient alternative, since approximately 5% of electricity generated in the US is lost in transit. The Brooklyn Microgrid not only minimises these losses but also offers economic benefits to those who have installed solar panels, as well as the local community.

Björn Borg and same-sex marriage

Same-sex marriage is still banned in a majority of countries across the world. With that in mind, the Swedish sportswear brand Björn Borg devised an ingenious way for loved ones to be joined in holy matrimony, regardless of sexual orientation, on the blockchain. But how?

Blockchain is stereotypically linked with money, but remove those connotations and you have an effective ledger that can record events as well as transactions.

Björn Borg has put this loophole to extremely good use by forming the digital platform Marriage Unblocked, where you can propose, marry and exchange vows all on the blockchain. What’s more, the records can be kept anonymous offering security for those in potential danger, and you get the flexibility of smart contracts.

Of course, you can request a certificate to display proudly too!

Whilst this doesn’t carry any legal weight, everything is produced and stored online. If religion or government isn’t a primary concern of yours, where’s the harm in a blockchain marriage?
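The "ledger of events" idea behind such a platform can be sketched minimally (this is an illustration of hash-chaining in general, not Björn Borg's actual implementation): each record embeds the hash of the previous one, so editing any past entry invalidates everything after it.

```python
import hashlib
import json

GENESIS = "0" * 64

def _digest(event: dict, prev_hash: str) -> str:
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list, event: dict) -> None:
    """Append an event, linking it to the previous record's hash."""
    prev = chain[-1]["hash"] if chain else GENESIS
    chain.append({"event": event, "prev": prev, "hash": _digest(event, prev)})

def verify(chain: list) -> bool:
    """Recompute every link; any edited record breaks the chain."""
    prev = GENESIS
    for record in chain:
        if record["prev"] != prev or record["hash"] != _digest(record["event"], prev):
            return False
        prev = record["hash"]
    return True

ledger = []
append(ledger, {"type": "vows", "parties": ["A", "B"]})
append(ledger, {"type": "marriage", "parties": ["A", "B"]})
```

A real blockchain adds distributed consensus on top of this structure, but the tamper-evidence that makes the records trustworthy comes from exactly this hash linking.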

Tangle – Simplifying the Internet of Things (IoT)

Blockchain offers ledgers that can record the huge amounts of data produced by IoT systems. Once again the upside is the level of transparency it offers that simply cannot be found in other services.

The Internet of Things is one of the most exciting developments to come out of technology. Its connected ecosystems can record and share various interactions. Blockchain lends itself perfectly to this, as it can transfer data and provide identification for both public and private sector use cases. For example:

  • Public sector – infrastructure management, taxes (and other municipal services).
  • Private sector – logistical upgrades, warehouse tracking, greater efficiency, and enhanced data capabilities.

IOTA’s Tangle is a distributed ledger designed specifically for IoT, handling machine-to-machine micropayments. It has reengineered distributed ledger technology (DLT), enabling the secure exchange of both value and data.

Tangle is the data structure behind micro-transaction crypto tokens that are purposely optimised and developed for IoT. It differs from other blockchains and cryptocurrencies by having a much lighter, more efficient way to deal with tens of billions of devices. 

It includes a decentralised peer-to-peer network that relies on a Directed Acyclic Graph (DAG), which creates a distributed ledger rather than “blocks”. There are no transaction fees, no mining, and no external consensus process. This also allows data to be transferred securely between digital devices.
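The DAG idea can be modelled in a few lines (a deliberate simplification of IOTA's protocol: no proof-of-work and no weighted random walk, just deterministic tip selection): each new transaction approves earlier unapproved transactions ("tips"), so the ledger grows as a graph rather than a single chain.

```python
class ToyTangle:
    """Each new transaction validates up to two unapproved ones ('tips')."""
    def __init__(self):
        self.approves = {"genesis": []}  # tx -> transactions it approves
        self.tips = ["genesis"]          # transactions nobody approves yet

    def attach(self, tx_id: str) -> None:
        chosen = self.tips[:2]  # deterministic tip selection for the demo
        self.approves[tx_id] = chosen
        self.tips = [t for t in self.tips if t not in chosen] + [tx_id]

tangle = ToyTangle()
tangle.attach("a")  # 'a' approves genesis
tangle.attach("b")  # 'b' approves 'a'
tangle.attach("c")
```

Because attaching a transaction is also the act of validating others, there is no separate miner role, which is what keeps the design light enough for constrained IoT devices.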

Walmart and IBM- Improving supply chains

Blockchain’s real-time tracking is essential for any company with a significant number of supply chains. 

Walmart partnered with IBM to build a food traceability blockchain on Hyperledger Fabric, tracking foods from the supplier to the shop shelf. When a food-borne disease outbreak occurs, it can take weeks to find the source. Better traceability through blockchain saves time and lives, allowing companies to act fast and protect affected farms.

Walmart chose blockchain technology as the best option for a decentralised food supply ecosystem. With IBM, they created a food traceability system based on Hyperledger Fabric. 

The food traceability system, initially built for two pilot products, worked, and Walmart can now trace the origin of over 25 products from five of its different suppliers using it.

Agora for elections and voter fraud

Voting on a blockchain offers full transparency and reduces the chance of voter fraud. A prime example is Sierra Leone, which in 2018 became the first country to run a blockchain-based election, with 70% of polling stations using the technology to anonymously store votes in an immutable ledger.

Sierra Leone results on the Agora blockchain

These results were placed on Agora’s blockchain, and by allowing anyone to view it, the government aimed to build trust with its citizens. The platform reduced the controversy and costs incurred when using paper ballots.

The result is a trustworthy and legitimate outcome that also limits hearsay from opposition voters and parties, especially in Sierra Leone, which has faced heavy corruption claims in the past.

MedRec and Dentacoin Healthcare

With the emphasis on keeping many records in a secure manner, blockchain lends itself nicely to medical records and healthcare.

MedRec is one business using blockchain to keep secure files of medical records by using a decentralised CMS and smart contracts. This also allows transparency of data and the ability to make secure payments connected to your health. Blockchain can also be used to track dental care in the same sort of way.

One example is Dentacoin, which uses an ERC20 token. It can be used for dental records, but also to ensure dental tools and materials are sourced appropriately, to verify that tools are used on the correct patients, to build networks that transfer information to each other quickly, and as a compliance tool.

Everledger – Luxury items and art selling

Blockchain’s ability to track data and transactions lends itself nicely to the world of luxury items.

Everledger.io is a blockchain-based platform that enhances transparency and security in supply chain management. It’s particularly used for high-value assets such as diamonds, art, and fine wines. 

The platform uses blockchain technology to create a digital ledger that records the provenance and lifecycle of these assets, ensuring authenticity and preventing fraud. Through offering a tamper-proof digital ledger, Everledger allows stakeholders to trace the origin and ownership history of valuable assets, reducing the risk of fraud and enhancing overall market transparency.

The diamond industry is a great use case of the Everledger platform. 

By recording each diamond’s unique attributes and history on an immutable blockchain, Everledger provides a secure and transparent way to verify the authenticity and ethical sourcing of diamonds. This not only helps combat the circulation of conflict diamonds but also builds consumer trust by providing a verifiable digital record of each diamond’s journey from mine to market.

To conclude

While there is a buzz around blockchain, it’s important to note that the industry is well-established, and these surprising cases of blockchain display the broad and exciting nature of the industry as a whole. There are still other advantages to blockchain that we haven’t delved into in this article, but we’ve highlighted one of its greatest advantages for businesses and consumers alike: its transparency.

If you or your business are working on an unusual blockchain case, let us know – we would love to hear about it! Also, if you are looking for reliable FinTech or blockchain experts, give us a shout; we offer many services to fix issues of scale.

The post 10 Unusual Blockchain Use Cases appeared first on Erlang Solutions.

by Erlang Solutions Team at June 06, 2024 10:55

ProcessOne

Understanding messaging protocols: XMPP and Matrix

In the world of real-time communication, two prominent protocols often come into discussion: XMPP and Matrix. Both protocols aim to provide robust and secure messaging solutions, but they differ in architecture, features, and community adoption. This article delves into the key differences and similarities between XMPP and Matrix to help you understand which might be better suited for your needs.

What is XMPP?

Overview

XMPP (Extensible Messaging and Presence Protocol) is an open-standard communication protocol originally developed for instant messaging (IM). It was designed as the Jabber protocol in 1999 to aggregate communication across a number of options, such as ICQ, Yahoo Messenger, and MSN. It was standardized by the IETF as RFC 3920 and RFC 3921 in 2004, and later revised as RFC 6120 and RFC 6121 in 2011.

Key Features

  • Decentralized Architecture: XMPP operates on a decentralized network of servers. The protocol is said to be federated. The network of all interconnected XMPP servers is called the XMPP federation.
  • Extensibility: The protocol is highly extensible through XMPP Extension Protocols (XEPs). There are currently more than 400 extensions covering a broad range of use cases like social networking and Internet of Things features through PubSub extensions, Groupchat (aka MUC, Multi-user chat), and VoIP with the Jingle protocol.
  • Security: Supports TLS for encryption and SASL for authentication. End-to-end encryption is available through the OMEMO extension.
  • Interoperability: Widely adopted with numerous clients and servers available.
  • Gateways: Built-in support for gateways to other protocols, allowing for communication across different messaging systems.

Network Protocol Design

  • TCP-Level Stream Protocol: XMPP is based on a TCP-level stream protocol using XML and namespaces. This extensibility while maintaining schema consistency is key. It can also run on top of other protocols like WebSocket or HTTP through the concept of binding.
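The stanzas exchanged on that stream are small namespaced XML fragments. As a sketch using only Python's standard library (the addresses are made up, and a real client would send this inside an authenticated `<stream:stream>` over TLS):

```python
import xml.etree.ElementTree as ET

NS = "jabber:client"
ET.register_namespace("", NS)  # serialize jabber:client as the default namespace

# A minimal one-to-one chat message stanza:
msg = ET.Element(f"{{{NS}}}message", {"to": "juliet@example.com", "type": "chat"})
ET.SubElement(msg, f"{{{NS}}}body").text = "Hello"

stanza = ET.tostring(msg, encoding="unicode")
```

Extensions (XEPs) add their own child elements under their own namespaces to exactly this kind of stanza, which is how the protocol stays both extensible and schema-consistent.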

Use Cases

  • Instant messaging
  • Presence information
  • Multi-user chat (MUC)
  • Social networks
  • Voice and video calls (with extensions)
  • Internet of Things
  • Massive-scale messaging (platforms like WhatsApp)

What is Matrix?

Overview

Matrix is an open standard protocol for real-time communication, designed to provide interoperability between different messaging systems. It was introduced in 2014 by the Matrix.org Foundation.

Key Features

  • Decentralized Architecture: Like XMPP, Matrix is also decentralized and supports a federated model.
  • Event-Based Model: Uses an event-based architecture where all communications are stored in a distributed database. The conversations are replicated on all servers in the federation that participate in the discussion.
  • End-to-End Encryption: Built-in end-to-end encryption using the Olm and Megolm libraries.
  • Bridging: Strong focus on bridging to other communication systems like Slack, IRC, and XMPP.

Network Protocol Design

  • HTTP-Based Protocol: Matrix uses HTTP for communication and JSON for its data structure, making it suitable for web environments and easy to integrate with web technologies.
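Concretely, sending a message in Matrix is an HTTP `PUT` of a small JSON body to the client-server API (the room id and transaction id below are made up for illustration):

```python
import json
from urllib.parse import quote

room_id = "!abc123:example.org"  # hypothetical room id
txn_id = "m.1"                   # client-chosen id, used for deduplication

# Endpoint shape from the Matrix client-server API:
# PUT /_matrix/client/v3/rooms/{roomId}/send/{eventType}/{txnId}
path = (f"/_matrix/client/v3/rooms/{quote(room_id, safe='')}"
        f"/send/m.room.message/{txn_id}")

body = json.dumps({"msgtype": "m.text", "body": "Hello from HTTP+JSON"})
```

Because everything is plain HTTP and JSON, any web stack can speak Matrix without a dedicated protocol library, which is exactly the integration advantage the bullet above describes.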

Use Cases

  • Instant messaging
  • VoIP and video conferencing
  • Bridging different chat systems

Detailed Comparison

Architecture

  • XMPP: Uses a federated model to build a network of communication that works for both messaging and social networking. The content is not duplicated by default.
  • Matrix: Uses a federated model where each server stores a complete history of conversations, allowing for decentralized control and redundancy.

XMPP is built around an event-based architecture to reach the largest possible scale. Matrix is built around a distributed model that may be more appealing to smaller community servers. As the conversations are distributed, it can cope more easily with servers suffering from frequent disconnections in the federated network.

Extensibility

  • XMPP: Extensible through XEPs that are standardized by the XMPP Standards Foundation, allowing for a wide variety of additional features. As the protocol is based on XML, it can also be extended for custom client features, using your own namespace. The XML schema can be used to define your extension data structure.
  • Matrix: Extensible through modules and APIs, with a strong focus on bridging to other protocols. It is extensible as well and allows custom events and custom properties.

Security

  • XMPP: Supports TLS for secure communication and SASL for authentication. End-to-end encryption is available through extensions like OMEMO.
  • Matrix: Supports TLS for secure communication. Built-in end-to-end encryption using Olm and Megolm, providing robust security out of the box.

Both end-to-end encryption approaches are similar, as they are both based on the same double ratchet encryption algorithm made popular by the Signal messaging platform.

Interoperability

  • XMPP: Known for its interoperability due to its long-standing presence and wide adoption. Includes built-in support for gateways to other protocols.
  • Matrix: Designed with interoperability in mind, with native support for bridging to other protocols. Its gateways are more recent. They could be ported to work on both protocols (which would be neat).

Scalability

  • XMPP: By design, XMPP has an edge in terms of scalability. XMPP is event-based and works as a broadcast hub for messages, making it efficient in handling a large number of concurrent users. It is proven to sustain millions of concurrent users.
  • Matrix: Matrix maps conversations to documents that are replicated across servers involved in the discussion. This means the document state needs to be merged and reconciled for each new posted message, which incurs significant overhead in terms of processing power, memory, and storage. Its use case is mainly “organization level” chat, supporting thousands of users, not millions.

Community and Adoption

  • XMPP: Established and widely adopted, with a large number of client and server implementations. This can be seen as a drawback, leading to an intimidating choice of tools. However, it has proven to be a strength: many competing implementations have proven to be interoperable, which validates the robustness of the protocol. Initially developed by Jeremie Miller, who co-created Jabber, Inc. to support the first server; the company was later acquired by Cisco. XMPP is now an Internet Engineering Task Force standard used for massive-scale deployments, and the protocol is driven by the non-profit XMPP Standards Foundation.
  • Matrix: Rapidly growing community with increasing adoption, particularly in open-source projects and decentralized applications. The main implementation is developed by Element, the company founded to grow the Matrix protocol.

Conclusion

Both XMPP and Matrix offer robust solutions for real-time communication with their own strengths. XMPP’s long history, extensibility, and efficient scalability make it a reliable choice for traditional instant messaging and presence-based applications, but also social networks, Internet of Things, and workflows that mix human users and devices. On the other hand, Matrix’s architecture, built-in end-to-end encryption, and focus on gateway development make it an excellent choice for those looking to integrate multiple communication systems or require secure corporate messaging through the Element client.

Using a server like ejabberd is a future-proof approach, as it is multiprotocol by design. ejabberd supports XMPP, MQTT, SIP, can act as a VoIP and video call proxy (STUN/TURN), and can federate with the Matrix network. It is likely to support the Matrix client protocol as well in beta in the near future.

Choosing between XMPP and Matrix depends largely on your specific needs, existing infrastructure, and future scalability requirements. Both protocols continue to evolve, offering exciting possibilities for real-time communication.


Mistakes? If you spot a mistake, please reach out to share it! Thanks! I would like this document to be as accurate as possible.

The post Understanding messaging protocols: XMPP and Matrix first appeared on ProcessOne.

by Mickaël Rémond at June 06, 2024 08:04

The XMPP Standards Foundation

The XMPP Newsletter May 2024

XMPP Newsletter Banner

Welcome to the XMPP Newsletter, great to have you here again! This issue covers the month of May 2024.

XMPP and Google Summer of Code 2024

The XSF has been accepted as a hosting organisation at GSoC in 2024 again! These XMPP projects have received a slot and will kick-off with coding now:

XSF and Google Summer of Code 2024

XSF Fiscal Hosting Projects

The XSF offers fiscal hosting for XMPP projects. Please apply via Open Collective. For more information, see the announcement blog post. Current projects you can support:

XMPP Events

XMPP Videos

Debian and XMPP in Wind and Solar Measurement talk at MiniDebConf Berlin 2024.

XMPP Articles

XMPP Software News

XMPP Clients and Applications

XMPP Servers

XMPP Web as Openfire plugin

XMPP Libraries & Tools

Slixfeed News Bot

Extensions and specifications

The XMPP Standards Foundation develops extensions to XMPP in its XEP series in addition to XMPP RFCs.

Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).

Proposed

The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

  • No XEP was proposed this month.

New

  • No new XEPs this month.

Deferred

If an experimental XEP is not updated for more than twelve months, it will be moved off Experimental to Deferred. If there is another update, it will put the XEP back onto Experimental.

  • No XEPs deferred this month.

Updated

  • Version 2.5.0 of XEP-0030 (Service Discovery)
    • Add note about some entities not advertising the feature. (pep)
  • Version 1.34.6 of XEP-0045 (Multi-User Chat)
    • Remove contradicting keyword on sending subject in §7.2.2. (pep)

Last Call

Last calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call can help improve the XEP before returning it to the Council for advancement to Stable.

  • XEP-0421: Anonymous unique occupant identifiers for MUCs
  • XEP-0440: SASL Channel-Binding Type Capability

Stable

  • Version 1.0.0 of XEP-0398 (User Avatar to vCard-Based Avatars Conversion)
    • Accept as Stable as per Council Vote from 2024-04-30. (XEP Editor (dg))

Deprecated

  • No XEP deprecated this month.

Rejected

  • No XEP rejected this month.

Spread the news

Please share the news on other networks:

Subscribe to the monthly XMPP newsletter
Subscribe

Also check out our RSS Feed!

Looking for job offers or want to hire a professional consultant for your XMPP project? Visit our XMPP job board.

Newsletter Contributors & Translations

This is a community effort, and we would like to thank translators for their contributions. Volunteers and more languages are welcome! Translations of the XMPP Newsletter will be released here (with some delay):

  • English (original): xmpp.org
    • General contributors: Adrien Bourmault (neox), Alexander “PapaTutuWawa”, Arne, cal0pteryx, emus, Federico, Gonzalo Raúl Nemmi, Jonas Stein, Kris “poVoq”, Licaon_Kter, Ludovic Bocquet, Mario Sabatino, melvo, MSavoritias (fae,ve), nicola, Simone Canaletti, singpolyma, XSF iTeam
  • French: jabberfr.org and linuxfr.org
    • Translators: Adrien Bourmault (neox), alkino, anubis, Arkem, Benoît Sibaud, mathieui, nyco, Pierre Jarillon, Ppjet6, Ysabeau
  • Italian: notes.nicfab.eu
    • Translators: nicola
  • Spanish: xmpp.org/categories/newsletter/
    • Translators: Gonzalo Raúl Nemmi

Help us to build the newsletter

This XMPP Newsletter is produced collaboratively by the XMPP community. Each month’s newsletter issue is drafted in this simple pad. At the end of each month, the pad’s content is merged into the XSF Github repository. We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. You have a project and want to spread the news? Please consider sharing your news or events here, and promote it to a large audience.

Tasks we do on a regular basis:

  • gathering news in the XMPP universe
  • short summaries of news and events
  • summary of the monthly communication on extensions (XEPs)
  • review of the newsletter draft
  • preparation of media images
  • translations
  • communication via media accounts

License

This newsletter is published under CC BY-SA license.

June 06, 2024 00:00

June 02, 2024

Remko Tronçon

Packaging Swift apps for Alpine Linux

While trying to build my Age Apple Secure Enclave plugin, a small Swift CLI app, on Alpine Linux, I realized that the Swift toolchain doesn’t run on Alpine Linux. The upcoming Swift 6 will support (cross-)compilation to a static (musl-based) Linux target, but I suspect an Alpine Linux version of Swift itself isn’t going to land soon. So, I explored some alternatives for getting my Swift app on Alpine.

by Remko Tronçon at June 02, 2024 00:00

May 30, 2024

Erlang Solutions

7 Key Blockchain Principles for Business

Welcome to the final instalment of our Blockchain for Business series. Here, we take a look at the seven fundamental principles that make up blockchain: immutability, decentralisation, ‘workable’ consensus, distribution and resilience, transactional automation (including ‘smart contracts’), transparency and trust, and links to the external world.

For business leaders, understanding these core principles is crucial in harnessing the potential for building trust, spearheading innovation and driving overall business efficiency. 

If you missed the previous blog, feel free to learn all about the strengths of Erlang and Elixir in blockchain here.

Now let’s discuss how these seven principles can be leveraged to transform business operations.

Understanding the Core Concepts

In a survey conducted by EY, over a third (38%) of US workers surveyed said that blockchain technology is widely used within their businesses. A further 44% said the tech would be widely used within three years and 18% reported that they were still a few years away from being widely used within their business.

To increase the adoption of blockchain, it is key to understand its principles, how it operates, and the advantages it offers across various industries, such as financial services, retail, advertising and marketing, and digital health.

Immutability

In an ideal world, we would want to keep an accurate record of events and make sure it doesn’t degrade over time due to natural events, human error, or fraud. While physical items can change over time, digital information can be continuously corrected to prevent deterioration.

Implementing an immutable blockchain aims to maintain a digital history that remains unaltered over time. This is especially useful for businesses when it comes to assessing the ownership or the authenticity of an asset or to validate one or more transactions. In the context of legalities and business regulation, having an immutable record of transactions is key as this can save time and resources by streamlining these processes.
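The tamper-evidence that underpins immutability can be made concrete with a toy hash chain. This is a deliberate simplification (the function names and block layout are ours, not those of any real ledger): each block commits to the hash of its predecessor, so altering any historical record breaks every later link.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's canonical JSON encoding with SHA-256."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def append_block(chain: list, data: str) -> None:
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def verify_chain(chain: list) -> bool:
    """Valid only if every link matches; editing history breaks later links."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
for record in ["alice pays bob 5", "bob pays carol 2"]:
    append_block(chain, record)

assert verify_chain(chain)
chain[0]["data"] = "alice pays mallory 5"   # tamper with history
assert not verify_chain(chain)              # detected by re-verification
```

Real blockchains add consensus, signatures and replication on top, but the detection mechanism is essentially this recomputation of linked hashes.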

In a well-designed blockchain, data is encoded using hashing algorithms. This ensures that only those with sufficient information can verify a transaction. This is typically implemented on top of Merkle trees, where hashes of combined hashes are calculated.

Merkle tree or hash tree
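A minimal sketch of this ‘hashes of combined hashes’ construction, assuming SHA-256 and duplicating the last node on odd-sized levels (real implementations, such as Bitcoin’s, differ in encoding and edge-case details):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Hash each leaf, then repeatedly hash concatenated pairs up to a single root."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

txs = [b"tx1", b"tx2", b"tx3", b"tx4"]
root = merkle_root(txs)
assert merkle_root(txs) == root                               # deterministic
assert merkle_root([b"tx1", b"txX", b"tx3", b"tx4"]) != root  # any change shows
```

Because the root commits to every leaf, a verifier only needs a logarithmic number of intermediate hashes to check that a given transaction is included.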

Challenges raised by business leaders

Legitimate questions can be raised by business leaders about storing an immutable data structure:

  • Scalability: How is the increasing volume of data handled once it surpasses ledger capacities?
  • Impact of decentralisation: What effect does growing data history and validation complexity have on decentralisation and participant engagement?
  • Performance verification: How does verification degrade as data history expands, particularly during peak usage?
  • Risk mitigation: How can we ensure consensus and prevent fragmented networks or unauthorised forks in transaction history?

Businesses face challenges in managing growing data, maintaining decentralisation, verifying transactions, and preventing risks in immutable data storage. Meeting regulations also adds complexity, and deciding what data to store must take sensitivity into account.

Addressing regulatory challenges

Compliance with GDPR introduces challenges, especially concerning the “right to be forgotten.” This is important because fines for breaches of GDPR are potentially very severe for non-compliance. The solutions introduced so far effectively aim at anonymising the information that enters the immutable on-chain storage process, while sensitive information is stored separately in support databases where this information can be deleted if required. 

The challenge lies in determining upfront what information is considered sensitive and suitable for inclusion in the immutable record. A wrong choice has the potential to backfire at a later stage if any involved actor manages to extract or trace sensitive information through the immutable history.

Immutability in blockchain technology provides a solution to preserving accurate historical records, ensuring the authenticity and ownership of assets, streamlining transaction validation, and saving businesses time and resources. But it also has its challenges, such as managing data volumes, maintaining decentralisation, and ensuring compliance with regulations such as GDPR. Despite these challenges, businesses can leverage immutable blockchain technology to modernise record-keeping practices and uphold the integrity of their operations.

Decentralisation of control

Remember the 2008 financial crash? One of the reactions following this crisis was against over-centralisation. 

In response to the movement towards decentralisation, businesses have acknowledged the potential for innovation and adaptation. Embracing decentralisation not only aligns with consumer values of independence and democratic fairness, but it also presents opportunities for businesses to explore new markets and develop innovative products and services, as well as implement decentralised governance models within their own organisations.

Use cases for decentralisation

There are many ways in which businesses can leverage blockchain technology in order to embrace decentralisation and unlock new growth opportunities:

Decentralised finance (DeFi): DeFi platforms leverage blockchain technology to provide financial services without the need for intermediaries, such as banks or brokerages.

Supply chain management: By recording every transaction on a blockchain ledger, businesses can track the movement of goods from the point of origin to the end consumer. 

Smart contracts: Automatically enforce and execute contractual agreements when predefined conditions are met, also without the need for intermediaries. 

Tokenisation of assets: Businesses can turn their assets into digital tokens. This helps split ownership into smaller parts, making it easier to buy and sell, and allowing direct trading between people without intermediaries.

Identity management: Blockchain-based identity management systems offer secure and decentralised solutions. Businesses can use blockchain to verify the identity of customers, employees, and partners while giving people greater control over their data. 

Data management and monetisation: Blockchain allows for businesses to securely manage and monetise data by giving individuals control over their data, facilitating direct transactions between data owners and consumers. 

Further considerations of decentralisation

With full decentralisation, there is no central authority to resolve potential transactional issues. Traditional, centralised systems have well-developed anti-fraud and asset recovery mechanisms which people have become used to. 

Using new, decentralised technology places a far greater responsibility on the user if they are to receive all of the benefits of the technology, forcing them to take additional precautions when it comes to handling and storing their digital assets.

There is no point in having an ultra-secure blockchain if one then hands over one’s wallet private key to an intermediary whose security is lax: it’s like having the most secure safe in the world and then writing the combination on a whiteboard in the same room.

Decentralisation, security, and usability

For businesses, embracing decentralisation unlocks new opportunities while posing challenges in security and usability. Balancing these factors is key as businesses continue to navigate decentralised technologies, shaping the future of commerce and industry. 

Businesses must consider whether the increased level of personal responsibility associated with secure blockchain implementation is a price users are willing to pay, or if they will trade off some security for ease of use and potentially more centralisation.

Workable Consensus

As businesses increasingly push towards decentralised forms of control and responsibility, the fundamental requirement to validate transactions without a central authority has come to light; this is known as the ‘consensus’ problem. The blockchain industry has seen various approaches emerge to address it, some competing and others complementing each other.

There’s been a lot of attention on governance in blockchain ecosystems. This involves regulating how quickly new blocks are added to the chain and the rewards for miners (especially in proof-of-work blockchains). Overall, it’s crucial to set up incentives and deterrents so that everyone involved helps the chain grow healthily.

Besides serving as an economic deterrent against denial of service and spam attacks, Proof of Work (POW) approaches are amongst the first attempts to automatically work out, via the use of computational power, which ledgers/actors have the authority to create/mine new blocks. Similar approaches (proof of space, proof of bandwidth etc) have followed, but all of them are vulnerable to deviations from the intended fair distribution of control.

Proof of work algorithm
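At its core, a POW scheme is a brute-force search for a nonce whose hash meets a difficulty target, which is cheap to verify but expensive to find. A toy sketch (the difficulty here is deliberately tiny; real networks use far harder targets and binary rather than hex comparison):

```python
import hashlib
import itertools

def mine(block_data: str, difficulty: int) -> int:
    """Search for a nonce whose SHA-256 hash has `difficulty` leading zero hex digits."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

nonce = mine("block payload", difficulty=4)
digest = hashlib.sha256(f"block payload{nonce}".encode()).hexdigest()
assert digest.startswith("0000")   # anyone can verify with a single hash
```

The asymmetry is the economic point: finding the nonce costs many hash attempts, while checking it costs one, which is what makes the scheme work as a spam and denial-of-service deterrent.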

How do these methods play out for businesses? Well-resourced participants can gain an edge by purchasing powerful hardware in bulk and running it in areas with cheaper electricity. This helps them outpace competitors in mining new blocks and gaining control, ultimately centralising authority.

In response to the challenges brought on by centralised control and environmental concerns associated with traditional mining methods, alternative approaches such as Proof of Stake (POS) and Proof of Importance (POI) have emerged. These methods remove the focus from computing resources and tie authority to accumulated digital asset wealth or participant productivity. However, implementing POS and POI while mitigating the risk of power and wealth concentration could present significant challenges for developers and business leaders alike.
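The basic idea behind POS, tying block-producing authority to stake rather than computing power, can be illustrated as stake-weighted random selection. This is a heavy simplification: real protocols add unbiased randomness beacons, slashing penalties and delegation, none of which is modelled here.

```python
import random

def pick_validator(stakes: dict, rng: random.Random) -> str:
    """Select a block proposer with probability proportional to stake held."""
    names = list(stakes)
    weights = [stakes[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

stakes = {"alice": 60.0, "bob": 30.0, "carol": 10.0}
rng = random.Random(42)
wins = {n: 0 for n in stakes}
for _ in range(10_000):
    wins[pick_validator(stakes, rng)] += 1

# alice holds 60% of the stake, so she should win roughly 60% of rounds
assert wins["alice"] > wins["bob"] > wins["carol"]
```

The concentration risk discussed above is visible even in this sketch: whoever accumulates the most stake is selected most often, and therefore earns the most rewards.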

Distribution and resilience

Apart from decentralising authority, control and governance, blockchain solutions typically embrace a distributed peer-to-peer (P2P) design paradigm. 

This preference is motivated by the inherent resilience and flexibility that these types of networks have introduced and demonstrated, particularly in the context of file and data sharing. A centralised network, typical of mainframes and centralised services is exposed to a ‘single point of failure’ vulnerability as the operations are always routed towards a central node.

If the central node breaks down or is congested, all the other nodes will be affected by disruptions. In a business context, decentralised and distributed networks attempt to reduce the detrimental effects that issues occurring on a node might trigger on other nodes. In a decentralised network, the failure of a node can still affect several neighbouring nodes that rely on it to carry out their operations. In a distributed network the idea is that the failure of a single node should not impact significantly any other node. Even when one preferential/optimal route in the network becomes congested or breaks down entirely, a message can still reach the destination via an alternative route. 

This greatly increases the chance of keeping a service available in the event of failure or malicious attacks such as a denial of service (DOS) attack. Blockchain networks with a distributed ledger redundancy are known for their resilience against hacking, especially when it comes to very large networks, such as Bitcoin. In such a highly distributed network, the resources needed to generate a significant disruption are very high, which not only delivers on the resilience requirement but also works as a deterrent against malicious attacks (mainly because the cost of conducting a successful malicious attack becomes prohibitive).

Although a distributed topology can provide an effective response to failures or traffic spikes, businesses need to be aware that delivering resilience against prolonged over-capacity demands or malicious attacks requires adequate adapting mechanisms. While the Bitcoin network is well positioned, as it currently benefits from a high capacity condition (due to the historically high incentive to purchase hardware by third-party miners), this is not the case for other emerging networks as they grow in popularity. This is where novel instruments, capable of delivering preemptive adaptation combined with back pressure throttling applied to the P2P level, can be of great value.

Distributed systems are not new and, whilst they provide highly robust solutions to many enterprise and governmental problems, they are subject to the laws of physics and require their architects to consider the trade-offs that need to be made in their design and implementation (e.g. consistency vs availability).

Automation

A high degree of automation is required for businesses to sustain a coherent, fair and consistent blockchain and surrounding ecosystem. Existing areas with a high demand for automation include those common to most distributed systems. For example; deployment, elastic topologies, monitoring, recovery from anomalies, testing, continuous integration, and continuous delivery. 

For blockchains, these represent well-established IT engineering practices. Additionally, there is a creative R&D effort to automate the interactions required to handle assets, computational resources and users across a range of new problem spaces (e.g. logistics, digital asset creation and trading).

Social and transactional interactions have seen a significant shift towards scripted, automated operations. This is where smart contracts and constrained virtual machine (VM) interpreters have emerged, an effort pioneered by the Ethereum project.

Many blockchain enthusiasts are drawn to the ability to set up asset exchanges, specifying conditions and actions triggered by certain events. Smart contracts find various applications in lotteries, digital asset trading, and derivative trading. However, despite the exciting potential of smart contracts, getting involved in this area requires a significant level of expertise. Only skilled developers who are willing to invest time in learning Domain Specific Languages (DSL) can create and modify these contracts.

The challenge is to respond to safety and security concerns when smart contracts are applied to edge case scenarios that deviate from the ‘happy path’. If badly designed contracts cannot properly roll back or undo a miscarried transaction, their execution might lead to assets being lost or erroneously handed over to unwanted receivers.
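A toy escrow ‘contract’ sketched as a Python state machine illustrates both the appeal of condition-triggered execution and why an explicit unhappy-path matters; the class and method names are ours, not any real smart-contract API:

```python
from enum import Enum, auto

class State(Enum):
    AWAITING_PAYMENT = auto()
    AWAITING_DELIVERY = auto()
    COMPLETE = auto()
    REFUNDED = auto()

class Escrow:
    """Toy escrow 'contract': funds move only when predefined conditions hold."""

    def __init__(self, price: int):
        self.price = price
        self.state = State.AWAITING_PAYMENT

    def deposit(self, amount: int) -> None:
        if self.state is not State.AWAITING_PAYMENT or amount != self.price:
            raise ValueError("deposit rejected")
        self.state = State.AWAITING_DELIVERY

    def confirm_delivery(self) -> None:
        if self.state is not State.AWAITING_DELIVERY:
            raise ValueError("nothing to release")
        self.state = State.COMPLETE          # release funds to the seller

    def refund(self) -> None:
        """The explicit unhappy path: without it, funds could be stuck forever."""
        if self.state is not State.AWAITING_DELIVERY:
            raise ValueError("nothing to refund")
        self.state = State.REFUNDED

deal = Escrow(price=100)
deal.deposit(100)
deal.refund()
assert deal.state is State.REFUNDED
```

On a real chain the contract’s code is immutable once deployed, so a missing or buggy refund branch like the one modelled here cannot be patched after the fact, which is exactly the safety concern raised above.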

Automation and governance

Another area in high need of automation is governance. Any blockchain ecosystem of users and computing resources requires periodic configurations of the parameters to carry on operating coherently and consensually. This results in a complex exercise of tuning for incentives and deterrents to guarantee the fulfilment of ambitious collaborative and decentralised goals. The newly emerging field of ‘blockchain economics’ (combining economics; game theory; social science and other disciplines) remains in its infancy.

The removal of a central ruling authority produces a vacuum that needs to be filled by an adequate decision-making body, which is typically supplied with automation that maintains a combination of static and dynamic configuration settings. The consensus solutions referred to earlier, which use computational resources or stakeable social assets to assign the authority not only to produce blocks but also to steer the variable part of governance, have succeeded in filling the decision-making gap in a fair and automated way. Subsequently, however, the exploitation of flaws in the static element of governance has hindered the success of these models. This has contributed to the rise in popularity of curated approaches such as POA or DPOS, which not only bring back centralised control but also reduce the automation of governance.

This is a major area of evolution in blockchain, and one where we expect to see widespread market adoption.

Transparency and trust

For businesses to produce the desired audience engagement for blockchain and eventual mass adoption and success, consensus and governance mechanisms need to operate transparently. Users need to know who has access to what data so that they can decide what can be stored and possibly shared on-chain. These are the contractual terms by which users agree to share their data. As previously discussed, users might be required to exercise the right for their data to be deleted, which typically is a feature delivered via auxiliary, ‘off-chain’ databases. In contrast, only hashed information, effectively devoid of its meaning, is preserved permanently on-chain.

Given the immutable nature of the chain history, it is important to decide upfront what data should be permanently written on-chain and what gets written off-chain. The users should be made aware of what data gets stored on-chain and with whom it could potentially be shared. Changing access to on-chain data or deleting it goes against the fundamentals of immutability and therefore is almost impossible. Getting that decision wrong at the outset can significantly affect the cost and usability (and therefore likely adoption) of the particular blockchain in question.

Besides transparency, trust is another critical feature that users and customers legitimately seek. Trust has to go beyond the scope of the people involved as systems need to be trusted as well. Every static element, such as an encryption algorithm, the dependency on a library, or a fixed configuration, is potentially exposed to vulnerabilities.

Link to the external world

The attractive features that blockchain has brought to the internet market would be limited to handling digital assets unless there was a way to link information to the real world. Embracing blockchain solely within digital boundaries may diminish its appeal, as businesses seek solutions that integrate seamlessly with the analogue realities of our lives.

Technologies used to overcome these limitations include cyber-physical devices such as sensors for input and robotic activators for output, and in most circumstances, people and organisations. As we read through most blockchain white papers, we occasionally come across the notion of the Oracle, which in short, is a way to name an input coming from a trusted external source that could potentially trigger/activate a sequence of transactions in a Smart Contract or which can otherwise be used to validate some information that cannot be validated within the blockchain itself.

Blockchain oracles connecting blockchains to inputs and outputs

Bitcoin and Ethereum, still the two dominant projects in the blockchain space are viewed by many investors as an opportunity to diversify a portfolio or speculate on the value of their respective cryptocurrencies. The same applies to a wide range of other cryptocurrencies except fiat pegged currencies, most notably Tether, where the value is effectively bound to the US dollar. Conversions from one cryptocurrency to another and to/from fiat currencies are normally operated by exchanges on behalf of an investor. These are again peripheral services that serve as a link to the external physical world. For businesses, these exchanges provide crucial services that facilitate investment and trading activities, contributing to the broader ecosystem of blockchain-based assets.

Besides oracles and cyber-physical links, interest is emerging in linking smart contracts together to deliver a comprehensive solution. Contracts could indeed operate in a cross-chain scenario to offer interoperability among a variety of digital assets and protocols. Although attempts to combine different protocols and approaches have emerged, this is still an area where further R&D is necessary to provide enough instruments and guarantees to developers and entrepreneurs. The challenge is to deliver cross-chain functionalities without the support of a central governing agency/body.

Concluding blockchain for business

As we’ve highlighted throughout the series, blockchain offers real transformative potential across a variety of industries. For a business to truly leverage this technology, the fundamentals we have highlighted must be understood in order to navigate the complexities of blockchain adoption successfully.

If you want to start a conversation with the team, feel free to drop us a line.

The post 7 Key Blockchain Principles for Business appeared first on Erlang Solutions.

by Erlang Solutions Team at May 30, 2024 09:46

Blockchain Tech Deep Dive 2/4 | Myths vs. Realities

This is the second part of our ‘Making Sense of Blockchain’ blog post series – you can read part 1 on ‘6 Blockchain Principles’ here. This article is based on the original post by Dominic Perini here.

Join our FinTech mailing list for more great content and industry and events news, sign up here >>

With so much hype surrounding blockchain, we separate the reality from the myths to ensure delivery of the ROI and competitive advantage that you need.
It’s not our aim here to discuss the data structure of blockchain itself, issues like those of transactions per second (TPS) or questions such as ‘what’s the best Merkle tree solution to adopt?’. Instead, we shall examine the state of maturity of blockchain technology and its alignment with the core principles that underpin a distributed ledger ecosystem.

Blockchain technology aims to embrace the following high-level principles:

7 founding principles of blockchain

  • Immutability 
  • Decentralisation 
  • ‘Workable’ consensus
  • Distribution and resilience
  • Transactional automation (including ‘smart contracts’)
  • Transparency and Trust
  • A link to the external world

Immutability of history

In an ideal world it would be desirable to preserve an accurate historical trace of events, and make sure this trace does not deteriorate over time, whether through natural events, human error or by the intervention of fraudulent actors. Artefacts produced in the analogue world face alterations over time while in the digital world the quantized / binary nature of stored information provides the opportunity for continuous corrections to prevent deterioration that might occur over time.

Writing an immutable blockchain aims to retain a digital history that cannot be altered over time. This is particularly useful when it comes to assessing the ownership or the authenticity of an asset or to validate one or more transactions.

We should note that, on top of the inherent immutability of a well-designed and implemented blockchain, hashing algorithms provide a means to encode the information that gets written in the history so that the capacity to verify a trace/transaction can only be performed by actors possessing sufficient data to compute the one-way cascaded encoding/encryption. This is typically implemented on top of Merkle trees where hashes of concatenated hashes are computed.

Legitimate questions can be raised about the guarantees for indefinitely storing an immutable data structure:

  • If this is an indefinitely growing history, where can it be stored once it grows beyond the capacity of the ledgers?
  • As the history size grows (and/or the computing power needed to validate further transactions increases) this reduces the number of potential participants in the ecosystem, leading to a de facto loss of decentralisation. At what point does this concentration of ‘power’ create concerns?
  • How does verification performance deteriorate as the history grows?
  • How does it deteriorate when a lot of data gets written on it concurrently by users?
  • How long is the segment of data that you replicate on each ledger node?
  • How much network traffic would such replication generate?
  • How much history is needed to be able to compute a new transaction?
  • What compromises need to be made on linearisation of the history, replication of the information, capacity to recover from anomalies and TPS throughput?


Further to the above questions, how many replicas converging to a specific history (i.e. consensus) are needed for it to carry on existing? And in particular:

  • Can a fragmented network carry on writing to their known history?
  • Is an approach designed to ‘heal’ any discrepancies in the immutable history of transactions by rewarding the longest fork, fair and efficient?
  • Are the deterrents strong enough to prevent a group of ledgers forming their own fork that eventually reaches wider adoption?


Furthermore, a new requirement to comply with the General Data Protection Regulations (GDPR) in Europe and ‘the right to be forgotten’ introduces new challenges to the perspective of keeping permanent and immutable traces indefinitely. This is important because fines for breaches of GDPR are potentially very severe. The solutions introduced so far effectively aim at anonymising the information that enters the immutable on-chain storage process, while sensitive information is stored separately in support databases where this information can be deleted if required. None of these approaches has yet been tested by the courts. 

The challenging aspect here is to decide upfront what is considered sensitive and what can safely be placed on the immutable history. A wrong choice can backfire at a later stage in the event that any involved actor manages to extract or trace sensitive information through the immutable history.

Immutability represents one of the fundamental principles that motivate the research into blockchain technology, both private and public. The solutions explored so far have managed to provide a satisfactory response to the market needs via the introduction of history linearisation techniques, one-way hashing encryptions, Merkle trees and off-chain storage, although the linearity of the immutable history comes at a cost (notably transaction volume).

Decentralisation of control

One of the reactions following the 2008 global financial crisis was against over-centralisation. This led to the exploration of various decentralised mechanisms. The proposition that individuals would like to enjoy the freedom to be independent of a central authority gained in popularity. Self-determination, democratic fairness and heterogeneity as a form of wealth are among the dominant values broadly recognised in Western (and, increasingly, non-Western) society. These values added weight to the movement that introducing decentralisation in a system is positive.

With full decentralisation, there is no central authority to resolve potential transactional issues for us. Traditional, centralised systems have well developed anti-fraud and asset recovery mechanisms which people have become used to. Using new, decentralised technology places a far greater responsibility on the user if they are to receive all of the benefits of the technology, forcing them to take additional precautions when it comes to handling and storing their digital assets.

There’s no point having an ultra-secure blockchain if one then hands over one’s wallet private key to an intermediary whose security is lax: it’s like having the most secure safe in the world then writing the combination on a whiteboard in the same room.

Is the increased level of personal responsibility that goes with the proper implementation of a secure blockchain a price that users are willing to pay? Or, will they trade off some security in exchange for ease of use (and, by definition, more centralisation)? 

Consensus

The consistent push towards decentralised forms of control and responsibility has brought to light the fundamental requirement to validate transactions without a central authority; known as the ‘consensus’ problem. Several approaches have grown out of the blockchain industry, some competing and some complementary.

There has also been a significant focus on the concept of governance within a blockchain ecosystem. This concerns the need to regulate the rates at which new blocks are added to the chain and the associated rewards for miners (in the case of blockchains using proof of work (POW) consensus methodologies). More generally, it is important to create incentives and deterrent mechanisms whereby interested actors contribute positively to the healthy continuation of chain growth.

Besides serving as an economic deterrent against denial of service and spam attacks, POW approaches are amongst the first attempts to automatically work out, via the use of computational power, which ledgers/actors have the authority to create/mine new blocks. Other similar approaches (proof of space, proof of bandwidth, etc.) followed; however, they all suffer from exposure to deviations from the intended fair distribution of control. Wealthy participants can, in fact, exploit these approaches to gain an advantage by purchasing high performance (CPU / memory / network bandwidth) dedicated hardware in large quantities and operating it in jurisdictions where electricity is relatively cheap. This allows them to overtake the competition in obtaining the reward, and the authority to mine new blocks, which has the inherent effect of centralising control. Also, the huge energy consumption that comes with the inefficient nature of the competitive race to mine new blocks in POW consensus mechanisms has raised concerns about its environmental impact and economic sustainability.
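To make the POW mechanism concrete, here is a minimal, illustrative sketch in Python (the hash function and leading-zeros difficulty scheme are simplified stand-ins, not any specific chain’s implementation):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> tuple[int, str]:
    """Brute-force a nonce whose SHA-256 digest starts with `difficulty` zero hex digits."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

# Each extra hex digit of difficulty multiplies the expected work by 16,
# while verification remains a single hash computation.
nonce, digest = mine("block #1", difficulty=4)
```

The asymmetry is the point: finding the nonce is expensive, but checking it is a single hash, which is what lets every node verify a block without redoing the race.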

Proof of Stake (POS) and Proof of Importance (POI) are among the ideas introduced to drive consensus via the use of more social parameters, rather than computing resources. These two approaches link the authority to the accumulated digital asset/currency wealth or the measured productivity of the involved participants. Implementing POS and POI mechanisms, whilst guarding against the concentration of power/wealth, poses not insubstantial challenges for their architects and developers.
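The core POS idea can be illustrated with a toy Python model that selects the next block producer with probability proportional to stake (a deliberate simplification; real protocols add unbiasable randomness sources, slashing penalties and more):

```python
import random

def select_validator(stakes: dict[str, int], seed: int) -> str:
    """Pick a validator with probability proportional to its stake."""
    rng = random.Random(seed)  # in practice the seed comes from an unbiasable randomness beacon
    validators = sorted(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

# A participant holding 90% of the stake wins roughly 90% of the rounds,
# which is exactly the concentration-of-wealth concern raised above.
wins = sum(select_validator({"alice": 90, "bob": 10}, seed) == "alice" for seed in range(1000))
```

The sketch also makes the fairness challenge visible: stake maps directly to authority, so without countermeasures the rich get richer.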

More recently, semi-automatic approaches, driven by a human-curated group of ledgers, are putting in place solutions to overcome the limitations and arguable fairness of the above strategies. The Delegated Proof of Stake (DPOS) and Proof of Authority (POA) methods promise higher throughput and lower energy consumption, while the human element can ensure a more adaptive and flexible response to potential deviations caused by malicious actors attempting to exploit a vulnerability in the system.

Distribution and resilience

Apart from decentralising authority, control and governance, blockchain solutions typically embrace a distributed peer to peer (P2P) design paradigm. This preference is motivated by the inherent resilience and flexibility that these types of networks have introduced and demonstrated, particularly in the context of file and data sharing.

A centralised network, typical of mainframes and centralised services, is clearly exposed to a ‘single point of failure’ vulnerability as the operations are always routed towards a central node. In the event that the central node breaks down or is congested, all the other nodes will be affected by disruptions.

Decentralised and distributed networks attempt to reduce the detrimental effects that issues occurring on a node might trigger on other nodes. In a decentralised network, the failure of a node can still affect several neighbouring nodes that rely on it to carry out their operations. In a distributed network the idea is that the failure of a single node should not impact significantly any other node. In fact, even when one preferential/optimal route in the network becomes congested or breaks down entirely, a message can reach the destination via an alternative route. This greatly increases the chance of keeping a service available in the event of failure or malicious attacks such as a denial of service (DOS) attack.

Blockchain networks where a distributed topology is combined with a high redundancy of ledgers backing a shared history have occasionally been declared ‘unhackable’ by enthusiasts or, as some more prudent debaters say, ‘difficult to hack’. There is truth in this, especially when it comes to very large networks such as that of Bitcoin. In such a highly distributed network, the resources needed to generate a significant disruption are very high, which not only delivers on the resilience requirement but also works as a deterrent against malicious attacks (principally because the cost of conducting a successful malicious attack becomes prohibitive).

Although a distributed topology can provide an effective response to failures or traffic spikes, you need to be aware that delivering resilience against prolonged over-capacity demands or malicious attacks requires adequate adaptation mechanisms. While the Bitcoin network is well positioned, as it currently benefits from a high-capacity condition (due to the historically high incentive for third-party miners to purchase hardware), this is not the case for other emerging networks as they grow in popularity. This is where novel instruments, capable of delivering preemptive adaptation combined with back pressure throttling applied at the P2P level, can be of great value.

Distributed systems are not new and, whilst they provide highly robust solutions to many enterprise and governmental problems, they are subject to the laws of physics and require their architects to consider the trade-offs that need to be made in their design and implementation (e.g. consistency vs availability).

Automation

In order to sustain a coherent, fair and consistent blockchain and surrounding ecosystem, a high degree of automation is required. Existing areas with a high demand for automation include those common to most distributed systems. For instance; deployment, elastic topologies, monitoring, recovery from anomalies, testing, continuous integration, and continuous delivery. In the context of blockchains, these represent well-established IT engineering practices. Additionally, there is a creative R&D effort to automate the interactions required to handle assets, computational resources and users across a range of new problem spaces (e.g. logistics, digital asset creation and trading).

Transactional operations have seen a significant shift towards scripted automation. This is where smart contracts and constrained virtual machine (VM) interpreters have emerged – an effort pioneered by the Ethereum project.

The ability to define how an asset exchange operates, under which conditions and following which triggers, has attracted many blockchain enthusiasts. Some of the most common applications of smart contracts involve lotteries, trade of digital assets and derivative trading. While there is clearly exciting potential unleashed by the introduction of smart contracts, it is also true that this is still an area with a high entry barrier. Only skilled developers who are willing to invest time in learning Domain Specific Languages (DSL) have access to the actual creation and modification of these contracts.

The challenge is to respond to safety and security concerns when smart contracts are applied to edge case scenarios that deviate from the ‘happy path’. If badly-designed contracts cannot properly roll back or undo a miscarried transaction, their execution might lead to assets being lost or erroneously handed over to unwanted receivers.
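The danger can be illustrated with a deliberately naive ledger in Python: the ‘happy path’ transfer debits before validating the receiver, so a failure mid-transaction destroys funds, whereas the atomic variant validates everything first and leaves state untouched on error (an illustration only; real smart-contract platforms enforce atomicity at the VM level):

```python
class Ledger:
    def __init__(self, balances: dict[str, int]):
        self.balances = dict(balances)

    def transfer_naive(self, src: str, dst: str, amount: int) -> None:
        # Happy-path code: debit first, validate later.
        self.balances[src] -= amount
        if dst not in self.balances:
            raise KeyError(dst)  # too late: the debit is never rolled back
        self.balances[dst] += amount

    def transfer_atomic(self, src: str, dst: str, amount: int) -> None:
        # Check every precondition before mutating any state.
        if dst not in self.balances or self.balances[src] < amount:
            raise ValueError("transfer rejected; state unchanged")
        self.balances[src] -= amount
        self.balances[dst] += amount
```

On an immutable chain there is no administrator who can repair the naive outcome after the fact, which is why the check-then-mutate discipline matters so much more than in conventional software.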

Another area in high need of automation is governance. Any blockchain ecosystem of users and computing resources requires periodic configuration of its parameters to continue operating coherently and consensually. This results in a complex exercise of tuning incentives and deterrents to guarantee the fulfilment of ambitious collaborative and decentralised goals. The newly emerging field of ‘blockchain economics’ (combining economics, game theory, social science and other disciplines) remains in its infancy.

Clearly, the removal of a central ruling authority produces a vacuum that needs to be filled by an adequate decision-making body, which is typically supplied with automation that maintains a combination of static and dynamic configuration settings. Those consensus solutions referred to earlier which use computational resources or stakeable assets to assign the authority, not only to produce blocks but also to steer the variable part of governance, have succeeded in filling the decision-making gap in a fair and automated way. Subsequently, the exploitation of flaws in the static element of governance has hindered the success of these models. This has contributed to the rise in popularity of curated approaches such as POA or DPOS, which not only bring back centralised control but also reduce the automation of governance.

We expect this to be one of the major areas where blockchain has to evolve in order to succeed in getting widespread market adoption.

Transparency and trust

In order to produce the desired audience engagement for blockchain and eventual mass adoption and success, consensus and governance mechanisms need to operate transparently. Users need to know who has access to what data so that they can decide what can be stored and possibly shared on-chain. These are the contractual terms by which users agree to share their data. As previously discussed, users might need to exercise the right for their data to be deleted, which typically is a feature delivered via auxiliary, ‘off-chain’ databases. In contrast, only hashed information, effectively devoid of its meaning, is preserved permanently on-chain.

Given the immutable nature of the chain history, it is important to decide upfront what data should be permanently written on-chain and what gets written off-chain. The users should be made aware of what data gets stored on-chain and with whom it could potentially be shared. Changing access to on-chain data or deleting it goes against the fundamentals of immutability and therefore is almost impossible. Getting that decision wrong at the outset can significantly affect the cost and usability (and therefore likely adoption) of the particular blockchain in question.
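The on-chain/off-chain split described above can be sketched in a few lines of Python: only a digest of the record is committed on-chain, so discarding the off-chain copy satisfies a deletion request while the chain itself never changes (a simplified illustration; the salting shown is one common way to resist brute-force guessing of the pre-image):

```python
import hashlib

def on_chain_commitment(record: bytes, salt: bytes) -> str:
    """Digest written immutably on-chain; the record itself stays off-chain."""
    return hashlib.sha256(salt + record).hexdigest()

salt = b"random-per-record-salt"   # stored off-chain alongside the record
record = b"alice@example.com"
commitment = on_chain_commitment(record, salt)

# Verification: anyone holding the off-chain record and salt can check it
# against the immutable on-chain digest.
assert on_chain_commitment(record, salt) == commitment
# Deletion: discard the record and salt; the bare hash left on-chain
# no longer reveals anything usable about the original data.
```

This is why the upfront on-chain/off-chain decision matters: the commitment is cheap and permanent, but anything written on-chain in the clear is permanent too.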

Besides transparency, trust is another critical feature that users legitimately seek. Trust has to go beyond the scope of the people involved as systems need to be trusted as well. Every static element, such as an encryption algorithm, the dependency on a library, or a fixed configuration, is potentially exposed to vulnerabilities.

Link to the external world

The attractive features that blockchain has brought to the internet market would be limited to handling digital assets unless there was a way to link information to the real world. It is safe to say that there would be less interest if we were to accept that a blockchain can only operate under the restrictive boundaries of the digital world, without connecting to the analog real world in which we live.

Technologies used to overcome these limitations include cyber-physical devices such as sensors for input and robotic actuators for output, and in most circumstances, people and organisations. As we read through most blockchain white papers we occasionally come across the notion of the Oracle, which in short, is a way to name an input coming from a trusted external source that could potentially trigger/activate a sequence of transactions in a Smart Contract or which can otherwise be used to validate some information that cannot be validated within the blockchain itself.

Bitcoin and Ethereum, still the two dominant projects in the blockchain space, are viewed by many investors as an opportunity to diversify a portfolio or speculate on the value of their respective cryptocurrency. The same applies to a wide range of other cryptocurrencies, with the exception of fiat-pegged currencies, most notably Tether, where the value is effectively bound to the US dollar. Conversions from one cryptocurrency to another and to/from fiat currencies are normally operated by exchanges on behalf of an investor. These are again peripheral services that serve as a link to the external physical world.

Besides oracles and cyber-physical links, interest is emerging in linking smart contracts together to deliver a comprehensive solution. Contracts could indeed operate in a cross-chain scenario to offer interoperability among a variety of digital assets and protocols. Although attempts to combine different protocols and approaches have emerged, this is still an area where further R&D is necessary in order to provide enough instruments and guarantees to developers and entrepreneurs. The challenge is to deliver cross-chain functionalities without the support of a central governing agency/body.

* originally published 2018 by Dominic Perini

For any business size in any industry, we’re ready to investigate, build and deploy your blockchain-based project on time and to budget.

Let’s talk

If you want to start a conversation about engaging us for your fintech project or talk about partnering and collaboration opportunities, please send our Fintech Lead, Michael Jaiyeola, an email or connect with him via LinkedIn.

The post Blockchain Tech Deep Dive 2/4 | Myths vs. Realities appeared first on Erlang Solutions.

by Erlang Solutions Team at May 30, 2024 09:06

May 28, 2024

The XMPP Standards Foundation

Scaling up with MongooseIM 6.2.1

MongooseIM is a scalable, extensible and efficient real-time messaging server that allows organisations to build cost-effective communication solutions. Built on the XMPP server, MongooseIM is specifically designed for businesses facing the challenge of large deployments, where real-time communication and user experience are critical. The main feature of the recently released MongooseIM 6.2.1 is the improved CETS in-memory storage backend, which simplifies and enhances its scalability.

It is difficult to predict how much traffic your XMPP server will need to handle. This is why MongooseIM offers several means of scalability. Firstly, even one machine can handle millions of connected users, provided that it is powerful enough. However, one machine is not recommended for fault tolerance reasons, because every time it needs to be shut down for maintenance, upgrade or because of any issues, your service would experience downtime. As a result, it is recommended to have a cluster of connected MongooseIM nodes, which communicate efficiently over the Erlang Distribution protocol. Having at least three nodes in the cluster allows you to perform a rolling upgrade, where each node is stopped, upgraded, and then restarted before moving to the next node, maintaining fault tolerance. During such an upgrade, you can increase hardware capabilities of each node, scaling the system vertically. Horizontal scaling is even easier, because you only need to add new nodes to the already deployed cluster.

Mnesia

Mnesia is a built-in Erlang Database that allows sharing both persistent and in-memory data between clustered nodes. It is a great option at the start of your journey, because it resides on the local disk of each cluster node and does not need to be started as a separate component. However, when you are heading towards a real application, a few issues with Mnesia become apparent:

  1. Consistency issues, which tend to show up quite frequently when scaling or upgrading the cluster. Resolving them requires Erlang knowledge, which should not be the case for a general-purpose XMPP server.
  2. A persistent volume for schema and data is required on each node. Such volumes can be difficult and costly to maintain, seriously impacting the possibilities for automatic scaling.
  3. Unlike relational databases, Mnesia is not designed for storing huge amounts of data, which can lead to performance issues.

First and foremost, it is highly recommended not to store any persistent data in Mnesia, and MongooseIM can be configured to store such data in a relational database instead. However, up to version 6.1.0, MongooseIM would still need Mnesia to store in-memory data shared between the cluster nodes. For example, a shared table of user sessions is necessary for message routing between users connected to different cluster nodes. The problem is that even without persistent tables, Mnesia still needs to keep its schema on disk, and the first two issues listed above would not be eliminated.

Introducing CETS

Introduced in version 6.2.0 and further refined in version 6.2.1, CETS (Cluster ETS) is a lightweight replication layer for in-memory ETS tables that requires no persistent data. Instead, it relies on a discovery mechanism to connect and synchronise with other cluster nodes. When starting up, each node registers itself in a relational database, which you should use anyway to keep all your persistent data.

Getting rid of Mnesia removes several important obstacles. For example, if you are using Kubernetes, MongooseIM no longer requires any persistent volume claims (PVCs), which could be costly, can get out of sync, and require additional management. Furthermore, with CETS you can easily set up horizontal autoscaling for your installation.
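As an illustrative sketch (section and option names are taken from the MongooseIM documentation; verify them against your installed version), switching the internal database to CETS with RDBMS-based node discovery is a small change in `mongooseim.toml`:

```toml
[internal_databases.cets]
  backend = "rdbms"            # nodes register and discover each other via the relational DB
  cluster_name = "mongooseim"  # nodes sharing this name form one cluster
```

With this in place, the relational database you already run for persistent data doubles as the discovery registry, and no per-node persistent volume is needed.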

See it in action

If you want to quickly set up a working autoscaled MongooseIM cluster using Helm, see the detailed blog post. For more information, consult the documentation, GitHub or the product page. You can try MongooseIM online as well.

Read about Erlang Solutions as sponsor of the XSF.

May 28, 2024 00:00