Planet Jabber

March 18, 2024

Erlang Solutions

Guess Less with Erlang Doctor

BEAM languages, such as Erlang and Elixir, offer a powerful tracing mechanism, and Erlang Doctor is built on top of it. It stores function calls and messages in an ETS table, which lowers the impact on the traced system, and enables querying and analysis of the collected traces. Being simple, always available and easy to use, it encourages you to pragmatically investigate system logic rather than guess about the reason for its behaviour.
This blog post is based on a talk I presented at the FOSDEM 2024 conference.

Introduction

It is tough to figure out why a piece of code is failing, or how unknown software is working. When confronted with an error or other unusual system behaviour, we might search for the reason in the code, but it is often unclear what to look for, and tools like grep can give a large number of results. This means that there is some guessing involved, and the less you know the code, the less chance you have of guessing correctly. BEAM languages such as Erlang and Elixir include a tracing mechanism, which is a building block for tools like dbg, recon or redbug. They let you set up tracing for specific functions, capture the calls, and print them to the console or to a file. The diagram below shows the steps of such a typical tracing activity, which could be called ad-hoc logging, because it is like enabling logging for particular functions without the need for adding log statements to the code.

The first step is to choose the function (or other events) to trace, and it is the most difficult one, because usually, we don’t know where to start – for example, all we might know is that there is no response for a request. This means that the collected traces (usually in text format) often contain no relevant information, and the process needs to be repeated for a different function. A possible way of scaling this approach is to trace more functions at once, but this would result in two issues:

  1. Traces are like logs, which means that it is very easy to get overwhelmed with the amount of data. It is possible to perform a text search, but any further processing would require data parsing.
  2. The amount of data might become so large that either structures like function arguments, return values and message contents get truncated, or the messages end up queuing up because of the I/O bottleneck.

The exact limit of this approach depends on the individual case, but usually, a rule of thumb is that you can trace one typical module, and collect up to a few thousand traces. This is not enough for many applications, e.g. if the traced behaviour is a flaky test – especially if it fails rarely, or if the impact of trace collection makes it irreproducible.

Tracing with Erlang Doctor

Erlang Doctor is yet another tool built on top of the Erlang tracer, but it has an important advantage – by storing the traces in an ETS table, it reduces the impact on the traced system (by eliminating costly I/O operations), while opening up the possibility of further processing and analysis of the collected traces.

No longer limited by the amount of produced text, this approach scales up to millions of collected traces, and the first limit you might hit is the system memory. Usually it is possible to trace all modules in an application (or even a few applications) at once, unless the system is under heavy load. Thanks to the clear separation between data acquisition and analysis, this approach can be called ad-hoc instrumentation rather than logging. The whole process has to be repeated only in rare situations, e.g. if the wrong application was traced. Of course, tracing production nodes is always risky and not recommended, unless very strict limits are set up in Erlang Doctor.

Getting Started

Erlang Doctor is available at https://github.com/chrzaszcz/erlang_doctor. For Elixir, there is https://github.com/chrzaszcz/ex_doctor, which is a minimal wrapper around Erlang Doctor. Both tools have Hex packages (erlang_doctor, ex_doctor). You have a few options for installation and running, depending on your use case:

  1. If you want it in your Erlang/Elixir shell right now, use the “firefighting snippets” provided in the Hex or GitHub docs. Because Erlang Doctor is just one module (and ExDoctor is two), you can simply download, compile, load and start the tool with a one-liner.
  2. For development, it is best to have it always at hand by initialising it in your ~/.erlang or ~/.iex.exs files. This way it will be available in all your interactive shells, e.g. rebar3 shell or iex -S mix.
  3. For easy access in your release, you can include it as a dependency of your project.

Basic usage

The following examples are in Erlang, and you can run them yourself – just clone erlang_doctor, compile it, and execute rebar3 as test shell. Detailed examples for both Erlang and Elixir are provided in the Hex Docs (erlang_doctor, ex_doctor). The first step is to start the tool:

1> tr:start().
{ok,<0.86.0>}

There is also tr:start/1, which accepts additional options. For example, tr:start(#{limit => 10000}) would stop tracing when there are 10 000 traces in the ETS table, providing a safety valve against excessive memory consumption.

Trace collection

Having started the Erlang Doctor, we can now trace selected modules – here we are using a test suite from Erlang Doctor itself:

2> tr:trace([tr_SUITE]).
ok

The tr:trace/1 function accepts a list of modules or {Module, Function, Arity} tuples. Alternatively, you can provide a map of options to trace specific processes or to enable message tracing. You can also trace entire applications, e.g. tr:trace_app(your_app) or tr:trace_apps([app1, app2]).
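
As a rough sketch of the map-based variant, the call below would restrict tracing to the current shell process and also record the messages it sends and receives. The modules, pids and msg option names follow the Hex docs, so double-check them there before relying on this exact shape:

%% Trace tr_SUITE only for the current process, including its messages.
tr:trace(#{modules => [tr_SUITE], pids => [self()], msg => all}).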

Let’s trace the following function call. It calculates the factorial recursively, and sleeps for 1 ms before each step:

3> tr_SUITE:sleepy_factorial(3).
6

It’s a good practice to stop tracing as soon as you don’t need it anymore:

4> tr:stop_tracing().
ok

Trace analysis

The collected traces are accumulated in an ETS table (default name: trace). They are stored as tr records, and to display them, we need to load the record definitions:

5> rr(tr).
[node,tr]

If you don’t have many traces, you can just list all of them:

6> tr:select().
[#tr{index = 1, pid = <0.175.0>, event = call, mfa = {tr_SUITE,sleepy_factorial,1},
     data = [3], ts = 1559134178217371, info = no_info},
 #tr{index = 2, pid = <0.175.0>, event = call, mfa = {tr_SUITE,sleepy_factorial,1},
     data = [2], ts = 1559134178219102, info = no_info},
 #tr{index = 3, pid = <0.175.0>, event = call, mfa = {tr_SUITE,sleepy_factorial,1},
     data = [1], ts = 1559134178221192, info = no_info},
 #tr{index = 4, pid = <0.175.0>, event = call, mfa = {tr_SUITE,sleepy_factorial,1},
     data = [0], ts = 1559134178223107, info = no_info},
 #tr{index = 5, pid = <0.175.0>, event = return, mfa = {tr_SUITE,sleepy_factorial,1},
     data = 1, ts = 1559134178225146, info = no_info},
 #tr{index = 6, pid = <0.175.0>, event = return, mfa = {tr_SUITE,sleepy_factorial,1},
     data = 1, ts = 1559134178225153, info = no_info},
 #tr{index = 7, pid = <0.175.0>, event = return, mfa = {tr_SUITE,sleepy_factorial,1},
     data = 2, ts = 1559134178225155, info = no_info},
 #tr{index = 8, pid = <0.175.0>, event = return, mfa = {tr_SUITE,sleepy_factorial,1},
     data = 6, ts = 1559134178225156, info = no_info}]

The index field is auto-incremented, and data contains an argument list or a return value, while ts is a timestamp in microseconds. To select specific fields of matching records, use tr:select/1, providing a selector function, which is passed to ets:fun2ms/1.

7> tr:select(fun(#tr{event = call, data = [N]}) -> N end).
[3, 2, 1, 0]

You can use tr:select/2 to further filter the results by searching for a specific term in data. In this simple example we search for the number 2:

8> tr:select(fun(T) -> T end, 2).
[#tr{index = 2, pid = <0.395.0>, event = call, mfa = {tr_SUITE,sleepy_factorial,1},
     data = [2], ts = 1705475521744690, info = no_info},
 #tr{index = 7, pid = <0.395.0>, event = return, mfa = {tr_SUITE,sleepy_factorial,1},
     data = 2, ts = 1705475521750454, info = no_info}]

This is powerful, as it searches all nested tuples, lists and maps, allowing you to search for arbitrary terms. For example, even if your code outputs something like “Unknown error”, you can pinpoint the originating function call. There is a similar function tr:filter/1, which filters all traces with a predicate function (this time not limited by fun2ms). In combination with tr:contains_data/2, you can get the same result as above:

9> Traces = tr:filter(fun(T) -> tr:contains_data(2, T) end).
[#tr{index = 2, pid = <0.395.0>, event = call, mfa = {tr_SUITE,sleepy_factorial,1},
     data = [2], ts = 1705475521744690, info = no_info},
 #tr{index = 7, pid = <0.395.0>, event = return, mfa = {tr_SUITE,sleepy_factorial,1},
     data = 2, ts = 1705475521750454, info = no_info}]


There is also tr:filter/2, which can be used to search in a different table than the current one – or in a list. As an example, let’s get only function calls from Traces returned by the previous call:

10> tr:filter(fun(#tr{event = call}) -> true end, Traces).
[#tr{index = 2, pid = <0.395.0>, event = call, mfa = {tr_SUITE,sleepy_factorial,1},
     data = [2], ts = 1705475521744690, info = no_info}]

To find the tracebacks (stack traces) for matching traces, use tr:tracebacks/1:

11> tr:tracebacks(fun(#tr{data = 1}) -> true end).
[[#tr{index = 3, pid = <0.395.0>, event = call, mfa = {tr_SUITE,sleepy_factorial,1},
      data = [1], ts = 1705475521746470, info = no_info},
  #tr{index = 2, pid = <0.395.0>, event = call, mfa = {tr_SUITE,sleepy_factorial,1},
      data = [2], ts = 1705475521744690, info = no_info},
  #tr{index = 1, pid = <0.395.0>, event = call, mfa = {tr_SUITE,sleepy_factorial,1},
      data = [3], ts = 1705475521743239, info = no_info}]]

Note that by specifying data = 1, we are only matching return traces, as call traces always have a list in data. Only one traceback is returned, starting with a call that returned 1. What follows is the stack trace for this call. There was a second matching traceback, but it wasn’t shown, because whenever two tracebacks overlap, the longer one is skipped. You can change this with tr:tracebacks/2, providing #{output => all} as the second argument. There are more options available, allowing you to specify the queried table/list, the output format, and the maximum amount of data returned. If you only need one traceback, you can call tr:traceback/1 or tr:traceback/2. Additionally, it is possible to pass a tr record (or an index) directly to tr:traceback/1.
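
For instance, to also get the overlapping traceback from the example above, you could pass the output option mentioned here (only output => all is shown, as it is the option confirmed above):

%% Return all matching tracebacks, including overlapping ones.
tr:tracebacks(fun(#tr{data = 1}) -> true end, #{output => all}).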


To get a list of traces between each matching call and the corresponding return, use tr:ranges/1:

12> tr:ranges(fun(#tr{data = [1]}) -> true end).
[[#tr{index = 3, pid = <0.395.0>, event = call, mfa = {tr_SUITE,sleepy_factorial,1},
      data = [1], ts = 1705475521746470, info = no_info},
  #tr{index = 4, pid = <0.395.0>, event = call, mfa = {tr_SUITE,sleepy_factorial,1},
      data = [0], ts = 1705475521748499, info = no_info},
  #tr{index = 5, pid = <0.395.0>, event = return, mfa = {tr_SUITE,sleepy_factorial,1},
      data = 1, ts = 1705475521750451, info = no_info},
  #tr{index = 6, pid = <0.395.0>, event = return, mfa = {tr_SUITE,sleepy_factorial,1},
      data = 1, ts = 1705475521750453, info = no_info}]]

There is also tr:ranges/2 with options, allowing you to set the queried table/list and to limit the depth of nested traces. In particular, you can use #{max_depth => 1} to get only the top-level call and the corresponding return. If you only need the first range, use tr:range/1 or tr:range/2.
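
As a quick sketch, limiting the depth for the same predicate would look like this, using the max_depth option described above:

%% Only the top-level call to sleepy_factorial(1) and its corresponding return.
tr:ranges(fun(#tr{data = [1]}) -> true end, #{max_depth => 1}).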

Last but not least, you can get a particular trace record with tr:lookup/1, and replay a particular function call with tr:do/1:

13> T = tr:lookup(1).
#tr{index = 1, pid = <0.175.0>, event = call, mfa = {tr_SUITE,sleepy_factorial,1},
    data = [3], ts = 1559134178217371, info = no_info}
14> tr:do(T).
6

This is useful e.g. for checking whether a bug has been fixed without running the whole test suite, or for reproducing an issue while capturing further traces. This function can also be called with an index as the argument: tr:do(1).

Quick profiling

Although there are dedicated profiling tools for Erlang, such as fprof and eprof, you can use Erlang Doctor to get a hint about possible bottlenecks and redundancies in your system with function call statistics. One of the advantages is that you already have the traces collected from your system, so you don’t need to trace again. Furthermore, tracing only specific modules gives you much simpler output that you can easily read and process in your Erlang shell.

Call statistics

To get statistics of function call times, you can use tr:call_stat/1, providing a function that returns a key by which the traces will be aggregated. The simplest use case is to get the total number of calls and their time. To do this, we group all calls under one key, e.g. total:

15> tr:call_stat(fun(_) -> total end).
#{total => {4,7216,7216}}

The tuple {4,7216,7216} means that there were four calls in total with an accumulated time of 7216 microseconds, and the “own” time was also 7216 μs – this is the case because we have aggregated all traced functions. To see different values, let’s group the stats by the function argument:

16> tr:call_stat(fun(#tr{data = [N]}) -> N end).
#{0 => {1,1952,1952}, 1 => {1,3983,2031}, 2 => {1,5764,1781}, 3 => {1,7216,1452}}

Now it is apparent that although sleepy_factorial(3) took 7216 μs, only 1452 μs were spent in the function itself, and the remaining 5764 μs were spent in the nested calls. To filter out unwanted function calls, just add a guard:

17> tr:call_stat(fun(#tr{data = [N]}) when N < 3 -> N end).
#{0 => {1,1952,1952}, 1 => {1,3983,2031}, 2 => {1,5764,1781}}

There are additional utilities: tr:sorted_call_stat/1 and tr:print_sorted_call_stat/2, which give you the same statistics in different output formats.
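
A minimal sketch of how these could be used is shown below; the assumption that the second argument of tr:print_sorted_call_stat/2 limits the number of printed entries should be verified against the Hex docs:

%% Call stats per argument, sorted by accumulated time (descending).
tr:sorted_call_stat(fun(#tr{data = [N]}) -> N end).

%% Pretty-print only the top 3 entries (limit assumed to be the 2nd argument).
tr:print_sorted_call_stat(fun(#tr{data = [N]}) -> N end, 3).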

Call tree statistics

If your code is performing the same operations very often, it might be possible to optimise it. To detect such redundancies, you can use tr:top_call_trees/0, which detects complete call trees that repeat several times, where corresponding function calls and returns have the same arguments and return values, respectively. As an example, let’s trace a call to a function which calculates the 4th element of the Fibonacci sequence recursively. The trace table should be empty, so let’s clean it up first:

18> tr:clean().
ok
19> tr:trace([tr_SUITE]).
ok
20> tr_SUITE:fib(4).
3
21> tr:stop_tracing().
ok

Now it is possible to print the most time-consuming call trees that repeat at least twice:

22> tr:top_call_trees().
[{13, 2, #node{module = tr_SUITE,function = fib, args = [2],
               children = [#node{module = tr_SUITE, function = fib, args = [1],
                                 children = [], result = {return,1}},
                           #node{module = tr_SUITE, function = fib, args = [0],
                                 children = [], result = {return,0}}],
               result = {return,1}}},
 {5, 3, #node{module = tr_SUITE,function = fib, args = [1],
              children = [], result = {return,1}}}]

The resulting list contains tuples {Time, Count, Tree} where Time is the accumulated time (in microseconds) spent in the Tree, and Count is the number of times the tree repeated. The list is sorted by Time, descending. In the example, fib(2) was called twice, which already shows that the recursive implementation is suboptimal. You can see the two repeating subtrees in the call tree diagram:

The second listed tree consists only of fib(1), and it was called three times. There is also tr:top_call_trees/1 with options, allowing customisation of the output format – you can set the minimum number of repetitions, maximum number of presented trees etc.
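
As an illustration only, a call restricting the output might look like the sketch below; the option names min_count and max_size are assumptions, so consult the Hex docs for the actual keys:

%% Hypothetical options: show at most 3 trees that repeat at least twice.
tr:top_call_trees(#{min_count => 2, max_size => 3}).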

ETS table manipulation

To get the current table name, use tr:tab/0:

23> tr:tab().
trace

To switch to a new table, use tr:set_tab/1. The table need not exist.

24> tr:set_tab(tmp).
ok

Now you can collect traces into the new table without modifying the original one. You can dump the current table to a file with tr:dump/1 – let’s dump the tmp table:

25> tr:dump("tmp.ets").
ok

In a new Erlang session, you can load the data with tr:load/1. This will set the current table name to tmp. Finally, you can remove all traces from the ETS table with tr:clean/0. To stop Erlang Doctor, just call tr:stop/0.
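
Putting these together, a follow-up session could look roughly like this (file and table names match the example above; shell output is omitted):

%% In a fresh node: start the tool, then load the previously dumped table.
tr:start().
tr:load("tmp.ets").  %% the current table is now 'tmp'
%% ... analyse the loaded traces ...
tr:clean().          %% remove all traces from the ETS table
tr:stop().           %% stop Erlang Doctor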

Summary

Now you have an additional utility in your Erlang/Elixir toolbox, which you can try out whenever you need to debug an issue or learn about unknown or unexpected system behaviour. Just remember to be extremely cautious when using it in a production environment. If you have any feedback, please provide it on GitHub, and if you like the tool, consider giving it a star.

The post Guess Less with Erlang Doctor appeared first on Erlang Solutions.

by Pawel Chrzaszcz at March 18, 2024 16:11

March 15, 2024

Ignite Realtime Blog

Openfire inVerse plugin version 10.1.7.1 released!

We have made available a new version of the inVerse plugin for Openfire! This plugin allows you to easily deploy the third-party Converse client in Openfire. In this release, the version of the client that is bundled in the plugin is updated to 10.1.7.

The updated plugin should become available for download in your Openfire admin console in the course of the next few hours. Alternatively, you can download the plugin directly from the plugin’s archive page.

For other release announcements and news, follow us on Mastodon or X.

1 post - 1 participant

Read full topic

by guus at March 15, 2024 12:55

March 14, 2024

Erlang Solutions

gen_statem Unveiled

gen_statem and protocols

This blog post is a deep dive into some of the concepts discussed in my recent conference talk at FOSDEM. The presentation explored some basic theoretical concepts of Finite State Machines and some special powers of Erlang’s gen_statem in the context of protocols and event-driven development. Building upon that insight, this post delves into harnessing the capabilities of the gen_statem behaviour. Let’s jump straight into it!

Protocols

The word protocol comes from the Greek “πρωτόκολλον”, from πρῶτος (prôtos, “first”) + κόλλα (kólla, “glue”), used in Byzantine Greek for the first sheet of a papyrus roll, bearing the official authentication and date of manufacture of the papyrus. Over time, the word describing the first page became a synecdoche for the entire document.

The word protocol was then used primarily to refer to diplomatic or political treaties, until the field of Information Technology overloaded it to describe “treaties” between machines too, which, as in diplomacy, govern the manner of communication between two entities. As the entities communicate, a given entity receives messages describing the interactions that its peers are establishing with it, creating a model where an entity reacts to events.

In this field of technology, much of a programmer’s job is implementing such communication protocols, which react to events. The protocol defines the valid messages, their valid order, and any side effects an event might have. You know many such protocols: TCP, TLS, HTTP, or XMPP, just to name some good old classics.

The event queue

As a BEAM programmer, you are well familiar with the archetypical paradigm for implementing such an event-driven program: you have a process, which has a mailbox, and the process reacts to these messages one by one. It is the actor model in a nutshell: an actor can, in response to a message it receives:

  • send a finite number of messages to other Actors;
  • create a finite number of new Actors;
  • designate the behaviour to be used for the next message it receives.

It is ubiquitous to implement such actors as a gen_server, but pay attention to the last point: designate the behaviour to be used for the next message it receives. When a given event (a message) carries information about how the next event should be processed, there is implicitly a transformation of the process state. What you have is a State Machine in disguise.

Finite State Machines

Finite State Machines (FSM for short) are a function 𝛿 from an input state and an input event to an output state, where the function can be applied again. This is the idea of the actor receiving a message and designating the behaviour for the next one: it chooses the state that will be input together with the next event.

FSMs can also define output; in such cases they are called Finite State Transducers (FST for short, often simplified to FSMs too). Their definition adds another alphabet of output symbols, and the function 𝛿 that defines the machine returns the next state together with the output symbol for the current input.
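
Restating the definitions above in standard notation, with S the set of states, Σ the input alphabet and Γ the output alphabet:

𝛿 : S × Σ → S        (FSM: current state and input event to next state)
𝛿 : S × Σ → S × Γ    (FST / Mealy machine: the next state plus an output symbol)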

gen_statem

When the function’s input is the current state and an input symbol, and the output is a new state and a new output symbol, we have a Mealy machine. And when the output alphabet of one machine is the input alphabet of another, we can then intuitively compose them. This is the pattern that gen_statem implements.

gen_statem has three important features that are easily overlooked, taking the best of pure Erlang programming and state machine modelling: it can simulate selective receives, offers an extended mailbox, and allows for complex data structures as the FSM state.

Selective receives

Imagine the archetypical example of an FSM: a light switch. The switch is, for example, digital, and it translates requests to a fancy light-set using an analogue cable protocol. The code you’ll need to implement will look something like the following:

handle_call(on, _From, {off, Light}) ->
    on = request(on, Light),
    {reply, on, {on, Light}};
handle_call(off, _From, {on, Light}) ->
    off = request(off, Light),
    {reply, off, {off, Light}};
handle_call(on, _From, {on, Light}) ->
    {reply, on, {on, Light}};
handle_call(off, _From, {off, Light}) ->
    {reply, off, {off, Light}}.

But now imagine the light request were asynchronous; your code would then look like the following:

handle_call(on, From, {off, undefined, Light}) ->
    Ref = request(on, Light),
    {noreply, {off, {on, Ref, From}, Light}};
handle_call(off, From, {on, undefined, Light}) ->
    Ref = request(off, Light),
    {noreply, {on, {off, Ref, From}, Light}};

handle_call(off, _From, {on, {off, _, _}, Light} = State) ->
    {reply, turning_off, State};  %% ???
handle_call(on, _From, {off, {on, _, _}, Light} = State) ->
    {reply, turning_on, State}; %% ???
handle_call(off, _From, {off, {on, _, _}, Light} = State) ->
    {reply, turning_on_wait, State};  %% ???
handle_call(on, _From, {on, {off, _, _}, Light} = State) ->
    {reply, turning_off_wait, State}; %% ???

handle_info(Ref, {State, {Request, Ref, From}, Light}) ->
    gen_server:reply(From, Request),
    {noreply, {Request, undefined, Light}}.

The problem is that now the order of events is not defined: the user requesting a switch and the light system announcing that it has finalised the request can be reordered, so you need to handle these cases. Even though the switch and the light system have only two states each, you had to design and write four new cases: the number of new cases grows by multiplying the number of cases on each side. And each case is a computation of the previous cases, effectively creating a user-level call stack.

So we now try migrating the code to a properly explicit state machine, as follows:

off({call, From}, off, {undefined, Light}) ->
    {keep_state_and_data, [{reply, From, off}]};
off({call, From}, on, {undefined, Light}) ->
    Ref = request(on, Light),
    {keep_state, {{Ref, From}, Light}, []};
off({call, From}, _, _) ->
    {keep_state_and_data, [postpone]};
off(info, {Ref, Response}, {{Ref, From}, Light}) ->
    {next_state, Response, {undefined, Light}, [{reply, From, Response}]}.

on({call, From}, on, {undefined, Light}) ->
    {keep_state_and_data, [{reply, From, on}]};
on({call, From}, off, {undefined, Light}) ->
    Ref = request(off, Light),
    {keep_state, {{Ref, From}, Light}, []};
on({call, From}, _, _) ->
    {keep_state_and_data, [postpone]};
on(info, {Ref, Response}, {{Ref, From}, Light}) ->
    {next_state, Response, {undefined, Light}, [{reply, From, Response}]}.

Now the key lies in postponing requests: this is akin to Erlang’s selective receive clauses, where the mailbox is explored until a matching message is found. Events that arrive out of order can this way be handled once the machine reaches the right state.

This is an important difference between how we learn to program in pure Erlang, with the power of selective receives where we choose which message to handle, and how we learn to program in OTP, where generic behaviours like gen_server force us to always handle the first message, albeit in different clauses depending on the semantics of the message (handle_cast, handle_call and handle_info). With the power to postpone a message, we effectively choose which message to handle without being constrained by the code location.
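
For comparison, this is roughly what a selective receive looks like in pure Erlang (a minimal sketch, not taken from the talk): the receive clause skips over any other messages in the mailbox until the expected reply arrives.

%% Block until the reply tagged with Ref arrives, leaving any other
%% messages untouched in the mailbox for later handling.
wait_for_reply(Ref) ->
    receive
        {Ref, Response} -> Response
    end.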

This section is inspired really by Ulf Wiger’s fantastic talk, Death by Accidental Complexity. So if you’ve known of the challenge he explained, this section hopefully serves as a solution to you.

Complex Data Structures

This was explained at length in the previous blog post on state machines. By using gen_statem’s handle_event_function callback, apart from all the advantages explained in the aforementioned post, we can also reduce the implementation of 𝛿 to a single function called handle_event, which lets the previous code take advantage of a lot of code reuse. See the following equivalent state machine:

handle_event({call, From}, State, State, {undefined, Light}) ->
    {keep_state_and_data, [{reply, From, State}]};
handle_event({call, From}, Request, State, {undefined, Light}) ->
    Ref = request(Request, Light),
    {keep_state, {{Ref, From}, Light}, []};
handle_event({call, _}, _, _, _) ->
    {keep_state_and_data, [postpone]};
handle_event(info, {Ref, Response}, State, {{Ref, From}, Light}) ->
    {next_state, Response, {undefined, Light}, [{reply, From, Response}]}.

This section was described extensively in the previous blog post, so to learn more about it, please enjoy the read!

An extended mailbox

We saw that the function 𝛿 of the FSM in question is called when a new event is triggered. In implementing a protocol, this is modelled by messages to the actor’s mailbox. In a pure FSM, a message that has no meaning within a state would crash the process, but in practice, while the order of messages is not defined, it might be a valid computation to postpone them and process them when we reach the right state.

This is what a selective receive would do, by exploring the mailbox and looking for the right message to handle for the current state. In OTP, the general practice is to leave the lower-level communication abstractions to the underlying language features, and code in a higher and more sequential style as defined by the generic behaviours: in gen_statem, we have an extended view of the FSM’s event queue.

There are two more things we can do with gen_statem actions: one is to insert ad-hoc events with the construct {next_event, EventType, EventContent}, and the other is to insert timeouts, which can be restarted automatically on any new event, on any state change, or not at all. These seem like different event queues for our eventful state machine, alongside the process’s mailbox, but really it is only one queue, which we can see as an extended mailbox.
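
As a small sketch (not from the original talk; the state and event names are made up) of what returning such actions can look like in a handle_event clause, using the standard gen_statem action tuples for an inserted event and the three kinds of timeouts:

handle_event(internal, connect, connecting, Data) ->
    {keep_state, Data,
     [{next_event, internal, send_handshake},  %% handled before any queued event
      {timeout, 1000, idle},                   %% event timeout: cancelled if any event arrives first
      {state_timeout, 5000, give_up},          %% state timeout: cancelled on state change
      {{timeout, ping}, 30000, send_ping}]};   %% generic timeout: not cancelled automatically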

The mental picture is as follows: There is only one event queue, which is an extension of the process mailbox, and this queue has got three pointers:

  • A head pointing at the oldest event;
  • A current pointing at the next event to be processed;
  • A tail pointing at the youngest event.

This model is meant to be practically identical to how the process mailbox is perceived.

  • postpone causes the current position to move to its next younger event, so the previous current position is still in the queue, reachable from head.
  • Not postponing an event, i.e. consuming it, causes the event to be removed from the queue and the current position to move to its next younger event.
  • NewState =/= State causes the current position to be set to head, i.e. the oldest event.
  • next_event inserts event(s) at the current position, i.e. as just older than the previous current position.
  • {timeout, 0, Msg} inserts a timeout Msg event after tail, i.e. as the new youngest received event.

Let’s see the event queue in pictures:


handle_event(Type1, Content1, State1, Data) ->
    {keep_state_and_data, [postpone]};
When the first event to process is 1, after any necessary logic we might decide to postpone it. In such a case, the event remains in the queue, reachable from HEAD, but Current is moved to the next event in the queue, event 2.

handle_event(Type1, Content1, State1, Data) ->
    {keep_state_and_data, [postpone]};
...
handle_event(Type2, Content2, State1, Data) ->
    {next_state, State2, Data};
When handling event 2, after any necessary logic, we decide to transition to a new state. In this case, 2 is removed from the queue, as it has been processed, and Current is moved to HEAD, which points again to 1, as the state is now a new one.

handle_event(Type1, Content1, State1, Data) ->
    {keep_state_and_data, [postpone]};
...
handle_event(Type2, Content2, State1, Data) ->
    {next_state, State2, Data};
...
handle_event(Type1, Content1, State2, Data) ->
    {keep_state_and_data, [{next_event, TypeA, ContentA}]};
After any necessary handling for 1, we now decide to insert a next_event called A. Then 1 is dropped from the queue, and A is inserted at the point where Current was pointing. HEAD is also updated to the next event after 1, which in this case is now A.
handle_event(Type1, Content1, State1, Data) ->
    {keep_state_and_data, [postpone]};
...
handle_event(Type2, Content2, State1, Data) ->
    {next_state, State2, Data};
...
handle_event(Type1, Content1, State2, Data) ->
    {keep_state_and_data, [{next_event, TypeA, ContentA}]};
...
handle_event(TypeA, ContentA, State2, Data) ->
    {keep_state_and_data, [postpone]};

Now we decide to postpone A, so Current is moved to the next event in the queue, 3.

handle_event(Type1, Content1, State1, Data) ->
    {keep_state_and_data, [postpone]};
...
handle_event(Type2, Content2, State1, Data) ->
    {next_state, State2, Data};
...
handle_event(Type1, Content1, State2, Data) ->
    {keep_state_and_data, [{next_event, TypeA, ContentA}]};
...
handle_event(TypeA, ContentA, State2, Data) ->
    {keep_state_and_data, [postpone]};
...
handle_event(Type3, Content3, State2, Data) ->
    keep_state_and_data;
3 is processed normally and then dropped from the queue. No event is inserted and nothing is postponed, so Current is simply moved to the next event, 4.



handle_event(Type1, Content1, State1, Data) ->
    {keep_state_and_data, [postpone]};
...
handle_event(Type2, Content2, State1, Data) ->
    {next_state, State2, Data};
...
handle_event(Type1, Content1, State2, Data) ->
    {keep_state_and_data, [{next_event, TypeA, ContentA}]};
...
handle_event(TypeA, ContentA, State2, Data) ->
    {keep_state_and_data, [postpone]};
...
handle_event(Type3, Content3, State2, Data) ->
    keep_state_and_data;
...
handle_event(Type4, Content4, State2, Data) ->
    {keep_state_and_data,
     [postpone, {next_event, TypeB, ContentB}]};
And 4 is now postponed, and a new event B is inserted, so while HEAD still remains pointing at A, 4 is kept in the queue and Current will now point to the newly inserted event B.

This section is in turn inspired by this comment on GitHub.

Conclusions

We’ve seen how protocols govern the manner of communication between two entities, defining how messages are transmitted and processed and how they relate to each other. We’ve seen how the third clause of the actor model dictates that an actor can designate the behaviour to be used for the next message it receives, how this essentially defines the 𝛿 function of a state machine, and that Erlang’s gen_statem behaviour is an FSM engine with a lot of power over the event queue and the state data structures.

Do you have protocol implementations that have suffered from extensibility problems? Have you had to implement an exploding number of cases because events might be reordered in any possible way? If you’ve suffered from death by accidental complexity, or your code has suffered from state machines in disguise, or your testing isn’t comprehensive enough by default, the tricks and points of view of this post should help you get started, and we can always help you keep moving forward!

The post gen_statem Unveiled appeared first on Erlang Solutions.

by Nelson Vides at March 14, 2024 09:21

March 12, 2024

JMP

Newsletter: eSIM Adapter (and Google Play Fun)

Hi everyone!

Welcome to the latest edition of your pseudo-monthly JMP update!

In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client.  Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

eSIM Adapter

This month we’re pleased to announce the existence of the JMP eSIM Adapter. This is a device that acts exactly like a SIM card and will work in any device that accepts a SIM card (phone, tablet, hotspot, Rocket Stick), but the credentials it offers come from eSIMs provided by the user. With the adapter, you can use eSIMs from any provider in any device, regardless of whether the device or OS support eSIM. It also means you can move all your eSIMs between devices easily and conveniently. It’s the best of both worlds: the convenience of downloading eSIMs along with the flexibility of moving them between devices and using them on any device.

So how are eSIMs downloaded and written to the device in order to use them? The easiest and most convenient way will be the official Android app, which will of course be freedomware and available in F-droid soon. The app is developed by PeterCxy of OpenEUICC fame. If you have an OS that bundles OpenEUICC, it will also work for writing eSIMs to the adapter. The app is not required to use the adapter, and swapping the adapter into another device will work fine. What if you want to switch eSIMs without putting the card back into an Android device? No problem; as long as your other device supports the standard SIM Toolkit menus, you will be able to switch eSIMs on the fly.

What if you don’t have an Android device at all? No problem, there are a few other options for writing eSIMs to the adapter. You can get a PC/SC reader device (about $20 on Amazon for example) and then use a tool such as lpac to download and write eSIMs to the adapter from your PC. Some other cell modems may also be supported by lpac directly. Finally, there is work in progress on an optional tool that will be able to use a server (optionally self-hosted) to facilitate downloading eSIMs with just the SIM Toolkit menus.

There is a very limited supply of these devices available for testing now, so if you’re interested, or just have questions, swing by the chatroom (below) and let us know. We expect full retail roll-out to happen in Q2.

Cheogram Android

Cheogram Android saw a major new release this month. Version 2.13.4-1 includes a visual refresh, many fixes, and new features, including:

  • Allow locally muting channel participants
  • Allow setting subject on messages and threads
  • Display list of recent threads in channel details
  • Support full channel configuration form for owners
  • Register with channel when joining, deregister when leaving (where supported)
  • Expert setting to choose voice message codec

Is My Contact List Uploaded?

Cheogram Android has always included optional features for integrating with your local Android contacts (if you give permission). If you add a Jabber ID to an Android contact, their name and image are displayed in the app. Additionally, if you use a PSTN gateway (such as cheogram.com, which JMP acts as a plugin for) all your contacts with phone numbers are displayed in the app, making it easy to message or call them via the gateway. This is all done locally and no information is uploaded anywhere as part of this feature.

Unfortunately, Google does not believe us. From speaking with developers of similar apps, it seems Google no longer believe anyone who has access to the device contacts is not uploading them somewhere. So, starting with this release, Cheogram Android from the Play Store says when asking for contact permission that contacts are uploaded. Not because they are, but because Google requires that we say so. The app’s privacy policy also says contacts are uploaded; again, only because Google requires that it say this without regard for whether it is true.

Can any of your contacts be exposed to your server? Of course. If you choose to send a message or make a call, part of the message or call’s metadata will transit your server, so the server could become aware of that one contact. Similarly, if you view the contact’s details, the server may be asked whether it knows anything about this contact. And finally, if you tap the “Add Contact” button in the app to save this contact to your server-side list, that one contact is saved server-side. Unfortunately, spelling out all these different cases did not appease Google, who insisted we must say that we “upload the contact list to the server” in exactly those words. So, those words now appear.

Thanks for Reading

The team is growing! This month we welcome SavagePeanut to the team to help out with development.

To learn what’s happening with JMP between newsletters, here are some ways you can find out:

Thanks for reading and have a wonderful rest of your week!

by Stephen Paul Weber at March 12, 2024 20:31

March 11, 2024

ProcessOne

Matrix gateway setup with ejabberd

As of version 24.02, ejabberd is shipped with a Matrix gateway and can participate in the Matrix federation. This means that an XMPP client can exchange messages with Matrix users or rooms.

Let’s see how to configure your ejabberd to enable this gateway.

Configuration in ejabberd

HTTPS listener

First, add an HTTP handler, as Matrix uses HTTPS for its Server-Server API.

In the listen section of your ejabberd.yml configuration file, add a handler on Matrix port 8448 for path /_matrix that calls the mod_matrix_gw module. You must enable TLS on this port to accept HTTPS connections (unless a proxy already handles HTTPS in front of ejabberd) and provide a valid certificate for your Matrix domain (see matrix_domain below). You can set this certificate using the certfile option of the listener, like in the example below, or listing it in the certfiles top level option.

Example:

listen:
  -
    port: 5222
    module: ejabberd_c2s
  -
    port: 8448 # Matrix federation
    module: ejabberd_http
    tls: true
    certfile: "/opt/ejabberd/conf/matrix.pem"
    request_handlers:
      "/_matrix": mod_matrix_gw

If you want to use a non-standard port instead of 8448, you must serve a /.well-known/matrix/server file on your Matrix domain (see below).

Server-to-Server

You must enable s2s (Server-to-Server federation) by pointing the s2s_access top-level option to an access rule that allows it (such as all or an allow rule):

Example:

s2s_access: s2s

access_rules:
  local:
    - allow: local
  c2s:
    - deny: blocked
    - allow
  s2s:
    - allow # to allow Matrix federation

Matrix gateway module

Finally, add mod_matrix_gw module in the modules list.

Example:

modules:
  mod_matrix_gw:
    matrix_domain: "matrixdomain.com"
    key_name: "key1"
    key: "SU4mu/j8b8A1i1EdyxIcKlFlrp+eSRBIlZwGyHP7Mfo="

matrix_domain

Replace matrixdomain.com with your Matrix domain. That domain must either resolve to your ejabberd server, or serve a file at https://matrixdomain.com/.well-known/matrix/server containing a JSON object with the address and Matrix port (as defined by the Matrix HTTPS handler, see above) of your ejabberd server:

Example:

{
   "m.server": "ejabberddomain.com:8448"
}

key_name & key

The key_name is arbitrary. The key value is your base64-encoded ed25519 Matrix signing key. It can be generated by Matrix tools, or in an Erlang shell with base64:encode(element(2, crypto:generate_key(eddsa, ed25519))), as shown below:

Example:

$ erl
Erlang/OTP 24 [erts-12.3.1] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:1] [dtrace]

Eshell V12.3.1 (abort with ^G)
1> base64:encode(element(2, crypto:generate_key(eddsa, ed25519))).
<<"SU4mu/j8b8A1i1EdyxIcKlFlrp+eSRBIlZwGyHP7Mfo=">>
2> q().
ok

Once your configuration is ready, you can restart ejabberd.

Testing

To check if your setup is correct, go to the following page and enter your Matrix domain (as set by the matrix_domain option):
https://federationtester.matrix.org/

This page should list any problem related to Matrix on your ejabberd installation.

Routing

What messages are routed to an external Matrix server?

Implicit routing

Let’s say an XMPP client connected to your ejabberd server sends a message to a JID user1@domain1.com. If domain1.com is defined by the hosts parameter of your ejabberd server (i.e. it’s one of your XMPP domains), the message will be routed locally. If it’s not, ejabberd will try to establish an XMPP Server-to-Server connection to a remote domain1.com XMPP server. If this fails (i.e. there is no such external domain1.com XMPP domain), then ejabberd will try on the Matrix federation, transforming the user1@domain1.com JID into the Matrix ID @user1:domain1.com and will try to open a connection to a remote domain1.com Matrix domain.

Explicit routing

It is also possible to route messages explicitly to the Matrix federation by setting the option matrix_id_as_jid in the mod_matrix_gw module to true:

Example:

modules:
  mod_matrix_gw:
    host: "matrix.@HOST@"
    matrix_domain: "matrixdomain.com"
    key_name: "key1"
    key: "SU4mu/j8b8A1i1EdyxIcKlFlrp+eSRBIlZwGyHP7Mfo="
    matrix_id_as_jid: true

In this case, the automatic fallback to Matrix when XMPP s2s fails is disabled and messages must be explicitly sent to the matrix gateway service Jabber ID to be routed to a remote Matrix server.

To send a message to the Matrix user @user:remotedomain.com, the XMPP client must send a message to the JID user%remotedomain.com@matrix.xmppdomain.com, where matrix.xmppdomain.com is the JID of the gateway service as set by the host option of the mod_matrix_gw module (the keyword @HOST@ is replaced with the XMPP domain of the server). If host is not set, the Matrix gateway JID is your XMPP domain with the matrix. prefix added.

The default value for matrix_id_as_jid is false, so the implicit routing will be used if this option is not set.

The post Matrix gateway setup with ejabberd first appeared on ProcessOne.

by Jérôme Sautret at March 11, 2024 09:48

XMPP Providers

XMPP Providers Chat

Come Join Us

After automating almost all the work necessary for querying provider properties, we were able to start adding new providers! There are 59 providers on the list at this moment 🎉 which enables us to show some statistics as well.

For most of the questions regarding the project (how properties can be provided by server admins and so on), the FAQ section provides you with answers.

However, if your questions were not answered there, we invite you to join our new chat (join directly). Feel free to discuss XMPP Providers related topics!

Spread the Word

The project lives from the community and client implementations, so follow us and spread the word!

XMPP Providers Logo

by XMPP Providers Team at March 11, 2024 00:00

March 07, 2024

Erlang Solutions

Harnessing your tech stack for a competitive Fintech advantage

Modern financial services must be based on a solid technical foundation to deliver the user experiences and business reliability needed for commercial success.

The role of the underlying technology is critical in enabling this success in fintech: building customer trust, guaranteeing operational resilience and optimal availability of fintech systems, and creating exceptional user experiences through features facilitated by a tech stack that just works.

These principles are core to the Erlang ecosystem, including the Elixir programming language and the powerful BEAM VM (or virtual machine). 

In this article, we will dive deeper into how your choice of tech stack impacts your business outcomes and why Erlang and Elixir will often be the right tools for the job in fintech.

Favourable qualities of a fintech system

Here are some of the non-negotiables of a tech stack if you are involved in a fintech development project.

A seamless customer experience

Many projects fall short in the fintech space due to focusing only on an application’s user interface. This short-sighted view doesn’t consider the knock-on effects of predicted (user growth) or unpredicted changes (like the pandemic lockdowns). 

For instance, your slick customer interface loses a lot of its shine when it’s connected to a legacy backend that is sluggish in responding to requests. 

When it comes to modern financial services, customers expect real-time, seamless and intelligent services and not clunky experiences. To ensure you deliver this, you need predictable behaviour under heavy loads and during usage spikes, resilience and fault tolerance without the associated costs skyrocketing.

Technology that enables business agility

Financial services is a fast-moving industry, with fintech revenues expected to grow from $245 billion to $1.5 trillion by 2030. To make the most of emerging opportunities, whether as an incumbent or a fintech-led startup, you need to be agile from a business perspective, and that agility is tightly bound to tech agility.

With the adoption of open-source technology on the rise in financial services, we’re starting to see the benefits of moving away from proprietary tech, with its risk of vendor lock-in and the obstacles that can create. When you can dedicate more resources to shipping code without being constrained and forced into unfavourable trade-offs, you’re better positioned to cash in on opportunities before your competitors.

You want a system that is easy to maintain and understand; this will help with onboarding new developers to the team and focusing resources where they can make the most impact.

Tech stacks that use fewer resources

Designing for sustainability is a key consideration for any business, especially in an industry under the microscope like financial services. The software industry is responsible for a high level of carbon usage, and shareholders and investors are now weighing this up when making final investment decisions. 

As funding in the tech space tightens, this is something that business leaders need to be aware of as a part of their tech decision-making strategy.

CTOs and architects can help by making better choices in technology. 

For example, using a BEAM-based language can reduce the physical infrastructure required, in some cases to just one-tenth of the servers. That leads to significant cost reductions and considerable savings in terms of carbon footprint.

System reliability and availability

A robust operational resiliency strategy strengthens your business case in financial services. 

The stress placed on systems by spikes in online commerce since the pandemic has taught us that it is critical to use technologies that are proven and built to deal with the unpredictability of the modern world.

One thing sure to damage any FS player, regardless of size, is high-profile system outages and downtime. This can cause severe reputation damage and attract hefty fines from regulators.

According to Gartner, the average cost of IT downtime is around $5,600 per minute. That is about $300,000 per hour on average. So avoiding downtime in your fintech production system is mission-critical.

How Erlang and Elixir meet fintech challenges

Erlang is a programming language designed to build massively scalable, soft real-time systems that require high availability. Elixir runs on the BEAM VM, the same virtual machine as Erlang, and can be adopted throughout the tech stack. Many of the world’s largest banking, e-commerce and fintech companies, such as Klarna, SumUp and SolarisBank, depend on these technologies to power their tech stacks.

Key attributes of Erlang/Elixir for fintech development:

  • Can handle a huge number of concurrent activities
  • Ideal for when actions must be performed at a certain point in time or within a specific time (soft real-time)
  • Benefits of system distribution
  • Ideal for massive software systems
  • Software maintenance without stopping the system
  • Built-in fault tolerance and reliability

If you’re in the early stages of your fintech project, you may not need these capabilities right away, but trying to retrofit them later can cost you valuable time and resources. Our expert team has helped many teams adopt BEAM-based technology at various times in their business lifecycle.  

Now let’s look at how Erlang/Elixir and the BEAM VM deliver against the desirable fintech characteristics outlined in the previous section.

System availability and resilience during unpredicted events

Functional Programming helps developers to write reliable software. Using the BEAM VM means you can have the reliability of up to ‘nine-nines’ (99.9999999%) – that’s almost zero downtime for your system, obviously very desirable in any fintech system.

This comes from Erlang/Elixir systems having ‘no single point of failure’ that risks bringing down your entire system. The ‘actor model’ (where parallel processes communicate with each other via messages) crucially does not have shared memory, so errors that inevitably will occur are localised and will not impact the rest of your system.

The actor model

Fault tolerance is another crucial aspect of Erlang/Elixir systems, making them a good option for your fintech project. ‘Supervisors’ are programmed with instructions on how to restart parts of a system when things do fail. This involves going back to a known initial state that is guaranteed to work:

The result is that using Erlang/Elixir means that your system will achieve unrivalled availability with far less effort and resources than other programming languages.

System scalability for growing demand

Along with unmatched system uptime and availability, Erlang and Elixir offer scalability that makes your system able to handle changes in demand and sudden spikes in traffic. This is possible without many of the difficulties of trying to scale with other programming languages.

With Erlang/Elixir, your code allows thousands of ‘processes’ to run concurrently on the same machine – in other words, you are making the most of each machine’s resources (vertical scaling). 

These processes are distributed, meaning they can communicate with processes on other machines within the network enabling developers to coordinate work across multiple nodes (horizontal scaling).

In the fintech startup space especially, having confidence that if you achieve dramatic levels of fast growth, your tech system will stand up to demand and not require a costly rewrite of the codebase can be a critical factor in maintaining momentum.

Concurrency model for high-volume transactional systems

Concurrent Programming makes it appear like multiple sequences of commands are being executed in parallel. Erlang and Elixir are ideal for workloads that involve a considerable amount of concurrency, such as transaction-intensive segments of financial services like payments and trading.  

The functional nature of Erlang/Elixir, plus the lightweight nature of how the BEAM executes processes, makes writing concurrent programs far more straightforward than with other languages.

If your fintech project expects to process massive amounts of transactional data from different sources, then Erlang/Elixir could be the most frictionless way for your team to go about building it.

Developer friendly

There are many reasons why developers enjoy working with Erlang/Elixir.

OTP middleware (the secret sauce behind Erlang/Elixir) abstracts the technical difficulty of concurrency and handling system failures. It allows your tech team the space to focus on business logic instead of time-consuming computational plumbing. 

Speed to market is a crucial differentiator in the competitive fintech landscape; with Erlang/Elixir, you can release new features in time to attract new customers and retain existing ones better than with many other languages.

Because using Erlang/Elixir for your project means less code and a lightweight execution model of processes demanding fewer CPU resources, you will need fewer servers, reducing your energy consumption and infrastructure costs. 

Using Erlang and Elixir in fintech

What is needed for success in fintech is a real-time, secure, reliable and scalable system that is easy to maintain and cost-efficient. Furthermore, you need a stack that lets your developers ship code and release new products and features quickly. The Erlang Ecosystem (Erlang, Elixir and the BEAM VM) sets a solid foundation for your fintech startup or project to be successful. 

With a reliable, easy-to-maintain code base, your most valuable resource (your tech talent) will be freed up to concentrate on delivering value and competitive advantage that delivers to your bottom line.

With the right financial products to market and an Erlang/Elixir backend, you can be confident in delivering smooth and fast end-user experiences that are always available and seamless. 

The post Harnessing your tech stack for a competitive Fintech advantage appeared first on Erlang Solutions.

by Content Team at March 07, 2024 08:30

March 04, 2024

Ignite Realtime Blog

Openfire 4.8.1 Release

The Ignite Realtime Community is pleased to announce the release of Openfire 4.8.1. This release addresses a number of issues found with the major 4.8.0 release a few months back.

Interested in getting started? You can download installers of Openfire here . Our documentation contains an upgrade guide that helps you update from an older version.

sha256sum checksum values for the release artefacts are as follows

2ff28c5d7ff97305b2d6572e60b02f3708e86750d959459d7c5d6e17d4f9f932  openfire-4.8.1-1.noarch.rpm
f622719e4dbd43aadc9434ba4ebc0d8c65ec30dd25a7d2e99c7de33006a24f56  openfire_4.8.1_all.deb
3507b5d64c961daf526a52a73baaac7c84af12eb0115b961c2f95039255aec57  openfire_4_8_1.dmg
141f6eaf374dfb7c4cca345e1b598fed5ce3af9c70062a8cc0d9571e15c29c7d  openfire_4_8_1.exe
c6f0cf25a2d10acd6c02239ad59ab5954da5a4b541bc19949bd381fefb856da1  openfire_4_8_1.tar.gz
bec5b03ed56146fec2f84593c7e7b269ee5c32b3a0d5f9e175bd41f28a853abe  openfire_4_8_1_x64.exe
7403113b701aaf8a37dcd2d7e22fbb133161d322ad74505c95e54eaf6533f183  openfire_4_8_1.zip

For other release announcements and news, follow us on Mastodon or X.

1 post - 1 participant

Read full topic

by akrherz at March 04, 2024 15:57

February 29, 2024

Isode

Cobalt 1.5 – New Capabilities

Overview

This release adds new functionality and features to Cobalt, our web based role and user provisioning tool. You can find out more about Cobalt here.

Multiple Cobalt Servers

This enhancement enables multiple Cobalt servers to be run against a single directory. There are two reasons for this.

  1. In a distributed environment it is useful to have multiple Cobalt servers at different locations, each connected to the local node of a multi-master directory.
  2. Where a read only directory is replicated, for example using Sodium Sync to a Mobile Unit, it is useful to run Cobalt (read only) against the replica, to allow local administrators to conveniently view the configuration using Cobalt.

Password Management and Password Policy

This update includes a number of enhancements relating to password management:

  1. Cobalt is now aware of password policy. A key change: when an administrator creates or changes a password and the password policy requires a user change, Cobalt will mark the password as requiring change by the user. To be useful in deployment, the applications used also need to be password policy aware.
  2. Cobalt now provides a user UI for password change/reset, complementing administrator password changes.
  3. Administrators now have the option to email a new password to the user.

Security Management

  1. Directory Access Rights Management. M-Vault Directory Groups enable specification of user rights to the directory and messaging configuration in the directory. This can be configured in Cobalt by domain administrators.
  2. Certificate expiry checking. When managing a directory holding many certificates, it is important to keep them up to date. Cobalt provides a tool which can be run at intervals to determine certificates which have expired and certificates which will expire soon.

User Directory Viewer

Cobalt’s primary purpose is directory administration. This update adds a complementary tool which enables users to access information in the directory managed by Cobalt. This uses anonymous access for user convenience.

Miscellaneous

  1. Flexible Search. Cobalt administrators have the option to configure search fields available for users. Configuration is per-domain.
  2. Users, Roles and mailing list members now sorted alphabetically.
  3. A Base DN can be specified for a domain’s users. If specified, Cobalt allows browsing users under this DIT entry using subtree search, and the add user operation is disabled. This allows Cobalt to:
    1. Use users provisioned by other means, for reference from within Cobalt-managed components.
    2. Modify those entries, while not allowing the addition of new entries.

by admin at February 29, 2024 13:18

Erlang Solutions

Blockchain Tech Deep Dive | 6 Principles

Blockchain technology is transforming nearly every industry, whether banking, government, fashion or logistics. The benefits of using blockchain are substantial. Businesses can lower transaction costs, free up capital, speed up processes, and enhance security and trust.

We’re mapping out the six key principles for blockchain integration success, so businesses can navigate the challenges and opportunities this disruptive technology presents while driving innovation and competition.

Blockchain – an overview

The world is becoming decentralised. 

Many platforms, technologies, and services are moving from centralised proprietary systems to decentralised, open ones.

This presents the perfect opportunity for businesses to step up and create fundamental change in how their data is managed: from a model where every company keeps its own copy of a data set to one in which all parties in a network have controlled access to a shared copy.

The key benefit of this is that traditional independent institutions can collaboratively work together to integrate and optimise existing processes to mutual advantage while, crucially, not compromising on the security of sensitive data.

Software engineering excellence

If you haven’t already, it’s time for your business to embrace the change, particularly in software engineering, by diving into the world of functional programming and the design patterns it spawns. Develop and refine code that accelerates the software development process, ensuring its smooth evolution and adaptation to meet critical time-to-market demands—crucial elements, especially in the realm of blockchain.

It’s also important to adopt a contemporary approach to testing, preserving a high level of quality throughout your system’s lifecycle. This can be achieved by making use of auto-generated Property-Based Tests and continuous stress tests, coupled with traditional Test-Driven Development, to ensure robustness.

Furthermore, enable your software engineers to embrace modern agile software development methodologies that support workforce scalability as needed. Incorporating agile practices such as deployment automation, type checks, intuitive naming conventions, and comprehensive documentation is paramount, particularly during handovers or when onboarding new developers.

Building resilient networks

When delving into networking for blockchain projects, it’s crucial to assemble a team well-versed in managing automated network traffic and dynamic topologies. Look for experts who bring a wealth of experience in this area and ensure they possess the monitoring capabilities needed for seamless adaptation to evolving scenarios, enabling proactive problem-solving.

Additionally, it’s essential to inquire whether their approach incorporates back-pressure control mechanisms to safeguard the system’s capacity against potential overloads, enhancing its overall robustness. These contemporary strategies apply across various network architectures, including centralised, decentralised, and distributed peer-to-peer (p2p) networks, often accompanied by specialised service discovery mechanisms.

Peer-to-Peer blockchain networks

Engaging in clear communication with your team from the outset about these considerations can significantly streamline the development process.

Collaboration

Collaborating with seasoned engineers is vital in the creation of massively scalable systems. Their expertise is unmatched in developing messaging systems or distributed databases, which instils confidence that they will make informed decisions regarding partitioning, sharding, and replica parameters.

The demand for engineering highly scalable and distributed systems has become increasingly pronounced, a challenge that the Erlang Solutions team tackle daily. We focus on constructing distributed systems capable of accommodating billions of users and transactions daily. 

Embracing language diversity

As a business leader, choosing the right programming language for blockchain is crucial for your company. It’s like picking the perfect tool for a job—you want something that fits just right and sets you up for success in the long run, ensuring compatibility, performance and adaptability for your future.

Business leaders might find Erlang or Elixir intriguing for blockchain development, thanks to their impressive capabilities in handling concurrency, managing faults, and scaling effortlessly. These languages shine in building distributed, real-time systems, which aligns well with the demands of blockchain applications requiring high transaction throughput and unwavering reliability. 

Also, the Erlang ecosystem boasts battle-tested tools and libraries for crafting fault-tolerant distributed systems. At the same time, Elixir, with its modern syntax and developer-friendly features, adds an extra layer of appeal. Opting for Erlang or Elixir sets the stage for crafting dependable, scalable blockchain solutions, offering the stability and performance essential for critical applications without the need for a hard sell.

Our extensive expertise in language interpretation and virtual machines has emerged as indispensable know-how in crafting diverse and modern blockchain solutions.

Integration

Integrating applications on top of complex backends and providing synchronous and asynchronous interfaces among backends is a serious job.

So it’s worth checking if your team uses frontend-facing APIs such as REST and Websocket to implement responsive applications. They should comply with industry standards for compatibility and security to drive message exchanges on top of various AMQP and JMS queuing mechanisms.

Fortifying security & resilience: Building a robust shield

Ensuring the safety and resilience of your system involves assembling the right team equipped to monitor and mend mechanisms swiftly. Employing crucial resilience components and strategies is key. From dedicated secure p2p protocols to static analysis and property-based testing, adopting these techniques bolsters the security of your system. Information validation plays a pivotal role in thwarting man-in-the-middle (MitM) attacks, while back pressure mechanisms act as a shield against distributed denial of service (DDoS) attacks.

Employing both symmetrical and asymmetrical encryptions is essential for achieving the highest level of security possible. Additionally, paying attention to hardware security ensures that sensitive private keys remain accessible only via hardware security modules (HSM).

Conclusion

Blockchain technology offers unprecedented opportunities for innovation and collaboration across industries. By adhering to these guiding principles and leveraging the expertise of seasoned professionals, you can unlock the full potential of blockchain and stay ahead in today’s ever-evolving landscape.

The post Blockchain Tech Deep Dive | 6 Principles appeared first on Erlang Solutions.

by Content Team at February 29, 2024 08:14

February 28, 2024

ProcessOne

ejabberd 24.02

🚀 Introducing ejabberd 24.02: A Huge Release!

ejabberd 24.02 has just been released, and it is a huge release, with 200 commits in the repository and more in the libraries. We’ve packed this update with a plethora of new features, significant improvements, and essential bug fixes, all designed to supercharge your messaging infrastructure.


🌐 Matrix Federation Unleashed: Imagine seamlessly connecting with Matrix servers – it’s now possible! ejabberd breaks new ground in cross-platform communication, fostering a more interconnected messaging universe. We still have some ground to cover, and for that we are waiting for your feedback.
🔐 Cutting-Edge Security with TLS 1.3 & SASL2: In an era where security is paramount, ejabberd steps up its game. With support for TLS 1.3 and advanced SASL2 protocols, we increase the overall security for all platform users.
🚀 Performance Enhancements with Bind 2: Faster connection times, especially crucial for mobile network users, thanks to Bind 2 and other performance optimizations.
🔄 Users gain better control over their messages: The new support for XEP-0424: Message Retraction allows users to manage their message history and remove something they posted by mistake.
🔧 Optimized server pings by relying on an existing mechanism coming from XEP-0198
📈 Streamlined API Versioning: Our refined API versioning means smoother, more flexible integration for your applications.
🧩 Enhanced Elixir, Mix and Rebar3 Support

If you upgrade ejabberd from a previous release, please review the changes described below.

A more detailed explanation of those topics and other features:

Matrix federation

ejabberd is now able to federate with Matrix servers. Detailed instructions for setting up Matrix federation with ejabberd will be published in another post.

Here is a quick summary of the configuration steps:

First, s2s must be enabled on ejabberd. Then define a listener that uses mod_matrix_gw:

listen:
  -
    port: 8448
    module: ejabberd_http
    tls: true
    certfile: "/opt/ejabberd/conf/server.pem"
    request_handlers:
      "/_matrix": mod_matrix_gw

And add mod_matrix_gw in your modules:

modules:
  mod_matrix_gw:
    matrix_domain: "domain.com"
    key_name: "somename"
    key: "yourkeyinbase64"

Support TLS 1.3, Bind 2, SASL2

Support for XEP-0424 Message Retraction

With the new support for XEP-0424: Message Retraction, users of MAM message archiving can control their message archiving, with the ability to ask for deletion.

Support for XEP-0198 pings

If stream management is enabled, let mod_ping trigger XEP-0198 <r/> requests rather than sending XEP-0199 pings. This avoids the overhead of the ping IQ stanzas, which, if stream management is enabled, are accompanied by XEP-0198 elements anyway.
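
As a rough sketch (not taken from the release notes; option values are illustrative), the new behaviour applies when both stream management and mod_ping are enabled in ejabberd.yml:

modules:
  mod_stream_mgmt: {}
  mod_ping:
    send_pings: true
    ping_interval: 60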

Update the SQL schema

The archive table now has a text column named origin_id (see commit 975681). There are two methods to update the SQL schema of your existing database:

If using MySQL or PostgreSQL, you can enable the update_sql_schema option and ejabberd will take care of updating the SQL schema when needed: add the line update_sql_schema: true to your ejabberd configuration file.
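
For instance, a minimal sketch of the relevant top-level options in ejabberd.yml (the database parameters are illustrative; only the last line is new):

sql_type: pgsql
sql_server: localhost
sql_database: ejabberd
update_sql_schema: true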

If you are using another database, or prefer to update the SQL schema manually:

  • MySQL default schema:
ALTER TABLE archive ADD COLUMN origin_id text NOT NULL DEFAULT '';
ALTER TABLE archive ALTER COLUMN origin_id DROP DEFAULT;
CREATE INDEX i_archive_username_origin_id USING BTREE ON archive(username(191), origin_id(191));
  • MySQL new schema:
ALTER TABLE archive ADD COLUMN origin_id text NOT NULL DEFAULT '';
ALTER TABLE archive ALTER COLUMN origin_id DROP DEFAULT;
CREATE INDEX i_archive_sh_username_origin_id USING BTREE ON archive(server_host(191), username(191), origin_id(191))
  • PostgreSQL default schema:
ALTER TABLE archive ADD COLUMN origin_id text NOT NULL DEFAULT '';
ALTER TABLE archive ALTER COLUMN origin_id DROP DEFAULT;
CREATE INDEX i_archive_username_origin_id ON archive USING btree (username, origin_id);
  • PostgreSQL new schema:
ALTER TABLE archive ADD COLUMN origin_id text NOT NULL DEFAULT '';
ALTER TABLE archive ALTER COLUMN origin_id DROP DEFAULT;
CREATE INDEX i_archive_sh_username_origin_id ON archive USING btree (server_host, username, origin_id);
  • MSSQL default schema:
ALTER TABLE [dbo].[archive] ADD [origin_id] VARCHAR (250) NOT NULL;
CREATE INDEX [archive_username_origin_id] ON [archive] (username, origin_id)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);
  • MSSQL new schema:
ALTER TABLE [dbo].[archive] ADD [origin_id] VARCHAR (250) NOT NULL;
CREATE INDEX [archive_sh_username_origin_id] ON [archive] (server_host, username, origin_id)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);
  • SQLite default schema:
ALTER TABLE archive ADD COLUMN origin_id text NOT NULL DEFAULT '';
CREATE INDEX i_archive_username_origin_id ON archive (username, origin_id);
  • SQLite new schema:
ALTER TABLE archive ADD COLUMN origin_id text NOT NULL DEFAULT '';
CREATE INDEX i_archive_sh_username_origin_id ON archive (server_host, username, origin_id);

Authentication workaround for Converse.js and Strophe.js

This ejabberd release includes support for XEP-0474: SASL SCRAM Downgrade Protection, and some clients may not support it correctly yet.

If you are using Converse.js 10.1.6 or older, Movim 0.23 Kojima or older, or any other client based on Strophe.js v1.6.2 or older, you may notice that they cannot authenticate correctly to ejabberd.

To solve that problem, either update to newer versions of those programs (if they exist), or temporarily enable the option disable_sasl_scram_downgrade_protection in the ejabberd configuration file ejabberd.yml like this:

disable_sasl_scram_downgrade_protection: true

Support for API versioning

Until now, when a new ejabberd release changed some API command (an argument renamed, a result in a different format…), you had to update your API client to the new API at the same time that you updated ejabberd.

Now the ejabberd API commands can have different versions; by default the most recent one is used, and the API client can specify the API version it supports.

In fact, this feature was implemented seven years ago, included in ejabberd 16.04, documented in ejabberd Docs: API Versioning… but it was never actually used!

This ejabberd release includes many fixes to get API versioning up to date, and it starts being used by several commands.

Let’s say that ejabberd 23.10 implemented API version 0, and this ejabberd 24.02 adds API version 1. You may want to update your API client to use the new API version 1… or you can continue using API version 0 and delay API update a few weeks or months.

To continue using API version 0:
– if using ejabberdctl, use the switch --version 0. For example: ejabberdctl --version 0 get_roster admin localhost
– if using mod_http_api, in ejabberd configuration file add v0 to the request_handlers path. For example: /api/v0: mod_http_api
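
For example, an ejabberd_http listener could expose both the latest API and a pinned version 0 path side by side (a sketch; port and paths are illustrative):

listen:
  -
    port: 5280
    module: ejabberd_http
    request_handlers:
      "/api": mod_http_api
      "/api/v0": mod_http_api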

Check the details in ejabberd Docs: API Versioning.

ejabberd commands API version 1

When you want to update your API client to support ejabberd API version 1, those are the changes to take into account:
– Commands with list arguments
– mod_http_api does not name integer and string results
– ejabberdctl with list arguments
– ejabberdctl list results

All those changes are described in the next sections.

Commands with list arguments

Several commands now use a list argument instead of a string with separators (different commands used different separators: ; : \\n ,).

The commands improved in API version 1:
add_rosteritem
oauth_issue_token
send_direct_invitation
srg_create
subscribe_room
subscribe_room_many

For example, srg_create in API version 0 took as arguments:

{"group": "group3",
 "host": "myserver.com",
 "label": "Group3",
 "description": "Third group",
 "display": "group1\\ngroup2"}

now in API version 1 the command expects as arguments:

{"group": "group3",
 "host": "myserver.com",
 "label": "Group3",
 "description": "Third group",
 "display": ["group1", "group2"]}

mod_http_api not named results

There was an inconsistency in mod_http_api results: when the result was an integer or a string, it contained the result name, whereas list/tuple/rescode results did not. For example:

$ curl -k -X POST -H "Content-type: application/json" -d '{}' "http://localhost:5280/api/get_loglevel/v0"
{"levelatom":"info"}

Starting in API version 1, when the result is an integer or a string, it will not contain the result name. This is now coherent with the other result formats (list, tuple, …), which don’t contain the result name either.

Some examples with API version 0 and API version 1:

$ curl -k -X POST -H "Content-type: application/json" -d '{}' "http://localhost:5280/api/get_loglevel/v0"
{"levelatom":"info"}

$ curl -k -X POST -H "Content-type: application/json" -d '{}' "http://localhost:5280/api/get_loglevel"
"info"

$ curl -k -X POST -H "Content-type: application/json" -d '{"name": "registeredusers"}' "http://localhost:5280/api/stats/v0"
{"stat":2}

$ curl -k -X POST -H "Content-type: application/json" -d '{"name": "registeredusers"}' "http://localhost:5280/api/stats"
2

$ curl -k -X POST -H "Content-type: application/json" -d '{"host": "localhost"}' "http://localhost:5280/api/registered_users/v0"
["admin","user1"]

$ curl -k -X POST -H "Content-type: application/json" -d '{"host": "localhost"}' "http://localhost:5280/api/registered_users"
["admin","user1"]

ejabberdctl with list arguments

ejabberdctl now supports list and tuple arguments, like mod_http_api and ejabberd_xmlrpc. This allows ejabberdctl to execute all the existing commands, even some that were impossible until now like create_room_with_opts and set_vcard2_multi.

List elements are separated with , and tuple elements are separated with :.

Relevant commands:
add_rosteritem
create_room_with_opts
oauth_issue_token
send_direct_invitation
set_vcard2_multi
srg_create
subscribe_room
subscribe_room_many

Some example uses:

ejabberdctl add_rosteritem user1 localhost testuser7 localhost NickUser77l gr1,gr2,gr3 both
ejabberdctl create_room_with_opts room1 conference.localhost localhost public:false,persistent:true
ejabberdctl subscribe_room_many user1@localhost:User1,admin@localhost:Admin room1@conference.localhost urn:xmpp:mucsub:nodes:messages,u

ejabberdctl list results

Until now, ejabberdctl returned list elements separated with ;. Now in API version 1 list elements are separated with ,.

For example, in ejabberd 23.10:

$ ejabberdctl get_roster admin localhost
jan@localhost jan   none    subscribe       group1;group2
tom@localhost tom   none    subscribe       group3

Since this ejabberd release, using API version 1:

$ ejabberdctl get_roster admin localhost
jan@localhost jan   none    subscribe       group1,group2
tom@localhost tom   none    subscribe       group3

It is still possible to get the results in the old syntax, using API version 0:

$ ejabberdctl --version 0 get_roster admin localhost
jan@localhost jan   none    subscribe       group1;group2
tom@localhost tom   none    subscribe       group3

ejabberdctl help improved

ejabberd supports around 200 administrative commands, and you probably consult them in the ejabberd Docs -> API Reference page, where all the commands’ documentation is nicely displayed…

The ejabberdctl command-line script already allowed you to consult the commands’ documentation, querying your ejabberd server in real time to show exactly the commands that are available. But it lacked some details about the commands. That has been improved, and now ejabberdctl shows all the information, including argument descriptions, examples and version notes.

For example, the connected_users_vhost command documentation as seen in the ejabberd Docs site is equivalently visible using ejabberdctl:

$ ejabberdctl help connected_users_vhost
  Command Name: connected_users_vhost

  Arguments: host::binary : Server name

  Result: connected_users_vhost::[ sessions::string ]

  Example: ejabberdctl connected_users_vhost "myexample.com"
           user1@myserver.com/tka
           user2@localhost/tka

  Tags: session

  Module: mod_admin_extra

  Description: Get the list of established sessions in a vhost

Experimental support for Erlang/OTP 27

Erlang/OTP 27.0-rc1 was recently released, and ejabberd can be compiled with it. If you are developing or experimenting with ejabberd, it would be great if you can use Erlang/OTP 27 and report any problems you find. For production servers, it’s recommended to stick with Erlang/OTP 26.2 or any previous version.

In this sense, the rebar and rebar3 binaries included with ejabberd are also updated: now they support from Erlang 24 to Erlang 27. If you want to use older Erlang versions from 20 to 23, there are compatible binaries available in git: rebar from ejabberd 21.12 and rebar3 from ejabberd 21.12.

Of course, if you already have rebar or rebar3 installed in your system, it is preferable to use those, because they will most likely be compatible with whatever Erlang version you have installed.

Installers and ejabberd container image

The binary installers now include the recent and stable Erlang/OTP 26.2.2 and Elixir 1.16.1. Many other dependencies were updated in the installers; the most notable is OpenSSL, which has jumped to version 3.2.1.

The ejabberd container image and the ecs container image have gotten all those version updates, and also Alpine is updated to 3.19.

By the way, this container image already had support for running commands when the container starts… And now you can set up those commands so that they are allowed to fail, by prepending the character !.
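
As a sketch of what this could look like with the CTL_ON_* variables mentioned in the changelog below (the variable name, the command and the exact placement of the ! prefix are assumptions, so double-check the container documentation):

services:
  ejabberd:
    image: ghcr.io/processone/ejabberd
    environment:
      # the leading ! is assumed to mark a command that is allowed to fail without aborting startup
      - CTL_ON_CREATE=!register admin localhost somepassword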

Summary of compilation methods

When compiling ejabberd from source code, you may have noticed there are a lot of possibilities. Let’s take an overview before digging into the new improvements:

  • Tools to manage the dependencies and compilation:
    • Rebar: it is nowadays very obsolete, but still does the job of compiling ejabberd
    • Rebar3: the successor of Rebar, with many improvements and plugins, supports hex.pm and Elixir compilation
    • Mix: included with the Elixir programming language, supports hex.pm, and erlang compilation
  • Installation methods:
    • make install: copies the files to the system
    • make prod: prepares a self-contained OTP production release in _build/prod/, and generates a tar.gz file. This was previously named make rel
    • make dev: prepares quickly an OTP development release in _build/dev/
    • make relive: prepares the barely minimum in _build/relive/ to run ejabberd and starts it
  • Start scripts and alternatives:
    • ejabberdctl with erlang shell: start/foreground/live
    • ejabberdctl with elixir shell: iexlive
    • ejabberd console/start (this script is generated by rebar3 or mix, and does not support ejabberdctl configurable options)

For example:
– the CI dynamic tests use rebar3, and Runtime tries to test all the possible combinations
– ejabberd binary installers are built using: mix + make prod
– container images are built using: mix + make prod too, and started with ejabberdctl foreground

Several combinations didn’t work correctly until now and have been fixed, for example:
– mix + make relive
– mix + make prod/dev + ejabberdctl iexlive
– mix + make install + ejabberdctl start/foreground/live
– the buggy make uninstall gets an experimental alternative: make uninstall-rel
– rebar + make prod with Erlang 26

Use Mix or Rebar3 by default instead of Rebar to compile ejabberd

ejabberd has used Rebar to manage dependencies and compilation since ejabberd 13.10 4d8f770. However, that tool has been obsolete and unmaintained for years, because there is a complete replacement:

Rebar3 has been supported by ejabberd since 20.12 0fc1aea. Among other benefits, this allows downloading dependencies from hex.pm and caching them in your system instead of downloading them from git every time, and allows compiling Elixir files and Elixir dependencies.

In fact, ejabberd has supported compilation with mix (a tool included with the Elixir programming language) since ejabberd 15.04 ea8db99 (with improvements in ejabberd 21.07 4c5641a).

For those reasons, the tool selection performed by ./configure will now be:
– If --with-rebar=rebar3 but Rebar3 not found installed in the system, use the rebar3 binary included with ejabberd
– Use the program specified in option: --with-rebar=/path/to/bin
– If none is specified, use the system mix
– If Elixir not found, use the system rebar3
– If Rebar3 not found, use the rebar3 binary included with ejabberd

Removed Elixir support in Rebar

Support for Elixir 1.1 as a dependency was added in commit 01e1f67 for ejabberd 15.02. This allowed compiling Elixir files. But since Elixir 1.4.5 (released Jun 22, 2017) it is no longer possible to get Elixir as a dependency… it is nowadays a standalone program. For that reason, support for downloading the old Elixir 1.4.4 as a dependency has been removed.

When Elixir support is required, it is better to simply install Elixir and use mix as the build tool:

./configure --with-rebar=mix

Or install Elixir and use the experimental Rebar3 support to compile Elixir files and dependencies:

./configure --with-rebar=rebar3 --enable-elixir

Added Elixir support in Rebar3

It is now possible to compile ejabberd using Rebar3 and support Elixir compilation. This compiles the Elixir files included in ejabberd’s lib/ path. There’s also support to get dependencies written in Elixir, and it’s possible to build OTP releases including Elixir support.

It is necessary to have Elixir installed in the system, and configure the compilation using --enable-elixir. For example:

apt-get install erlang erlang-dev elixir
git clone https://github.com/processone/ejabberd.git ejabberd
cd ejabberd
./autogen.sh
./configure --with-rebar=rebar3 --enable-elixir
make
make dev
_build/dev/rel/ejabberd/bin/ejabberdctl iexlive

Elixir versions supported

Elixir 1.10.3 is the minimum supported, but:
– Elixir 1.10.3 or higher is required to build an OTP release with make prod or make dev
– Elixir 1.11.4 or higher is required to build an OTP release if using Erlang/OTP 24 or higher
– Elixir 1.11.0 or higher is required to use make relive
– Elixir 1.13.4 with Erlang/OTP 23.0 are the lowest versions tested by Runtime

For all those reasons, if you want to use Elixir, it is highly recommended to use Elixir 1.13.4 or higher with Erlang/OTP 23.0 or higher.

make rel is renamed to make prod

When ejabberd started to use the Rebar2 build tool, that tool could create an OTP release, and the target in Makefile.in was conveniently named make rel.

However, newer tools like Rebar3 and Elixir’s Mix support creating different types of releases: production, development, … In this sense, our make rel target is nowadays more properly named make prod.

For backwards compatibility, make rel redirects to make prod.

New make install-rel and make uninstall-rel

This is an alternative method to install ejabberd in the system, based on the OTP release process. It should produce exactly the same results as the existing make install.

The benefits of make install-rel over the existing method:
– this uses OTP release code from rebar/rebar3/mix, and consequently requires less code in our Makefile.in
– make uninstall-rel correctly deletes all the library files

This is still experimental, and it would be great if you are able to test it and report any problem; eventually this method could replace the existing one.
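
A minimal sketch of trying it out from a source checkout (assuming the usual build prerequisites are installed):

./autogen.sh
./configure
make
make install-rel
# later, to remove the installed files:
make uninstall-rel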

Just for curiosity:
– ejabberd 13.03-beta1 added support for make uninstall
– ejabberd 13.10 introduced the Rebar build tool and the code got more modular
– ejabberd 15.10 started to use the OTP directory structure for ‘make install’, and this broke make uninstall

Acknowledgments

We would like to thank the contributions to the source code, documentation, and translation provided for this release by:

And also to all the people contributing in the ejabberd chatroom, issue tracker…

Improvements in ejabberd Business Edition

Customers of the ejabberd Business Edition, in addition to all those improvements and bugfixes, also get:

Push

  • Fix clock issue when signing Apple push JWT tokens
  • Share Apple push JWT tokens between nodes in cluster
  • Increase allowed certificates chain depth in GCM requests
  • Use x:oob data as source for image delivered in pushes
  • Process only https urls in oob as images in pushes
  • Fix jid in disable push iq generated by GCM and Webhook service
  • Add better logging for TooManyProviderTokenUpdated error
  • Make get_push_logs command generate better error if mod_push_logger not available
  • Add command get_push_logs that can be used to retrieve info about recent pushes and errors reported by push services
  • Add support for webpush protocol for sending pushes to safari/chrome/firefox browsers

MAM

  • Expand mod_mam_http_access API to also accept range of messages

MUC

  • Update mod_muc_state_query to fix subject_author room state field
  • Fix encoding of config xdata in mod_muc_state_query

PubSub

  • Allow pubsub node owner to overwrite items published by other persons (p1db)

ChangeLog

This is a more detailed list of changes in this ejabberd release:

Core

  • Added Matrix gateway in mod_matrix_gw
  • Support SASL2 and Bind2
  • Support tls-server-end-point channel binding and sasl2 codec
  • Support tls-exporter channel binding
  • Support XEP-0474: SASL SCRAM Downgrade Protection
  • Fix presenting features and returning results of inline bind2 elements
  • disable_sasl_scram_downgrade_protection: New option to disable XEP-0474
  • negotiation_timeout: Increase default value from 30s to 2m
  • mod_carboncopy: Teach how to interact with bind2 inline requests

Other

  • ejabberdctl: Fix startup problem when having set EJABBERD_OPTS and logger options
  • ejabberdctl: Set EJABBERD_OPTS back to "", and use previous flags as example
  • eldap: Change logic for eldap tls_verify=soft and false
  • eldap: Don’t set fail_if_no_peer_cert for eldap ssl client connections
  • Ignore hints when checking for chat states
  • mod_mam: Support XEP-0424 Message Retraction
  • mod_mam: Fix XEP-0425: Message Moderation with SQL storage
  • mod_ping: Support XEP-0198 pings when stream management is enabled
  • mod_pubsub: Normalize pubsub max_items node options on read
  • mod_pubsub: PEP nodetree: Fix reversed logic in node fixup function
  • mod_pubsub: Only care about PEP bookmarks options when creating node from scratch

SQL

  • MySQL: Support sha256_password auth plugin
  • ejabberd_sql_schema: Use the first unique index as a primary key
  • Update SQL schema files for MAM’s XEP-0424
  • New option sql_flags: right now only useful to enable mysql_alternative_upsert

Installers and Container

  • Container: Add ability to ignore failures in execution of CTL_ON_* commands
  • Container: Update to Erlang/OTP 26.2, Elixir 1.16.1 and Alpine 3.19
  • Container: Update this custom ejabberdctl to match the main one
  • make-binaries: Bump OpenSSL 3.2.1, Erlang/OTP 26.2.2, Elixir 1.16.1
  • make-binaries: Bump many dependency versions

Commands API

  • print_sql_schema: New command available in ejabberdctl command-line script
  • ejabberdctl: Rework temporary node name generation
  • ejabberdctl: Print argument description, examples and note in help
  • ejabberdctl: Document exclusive ejabberdctl commands like all the others
  • Commands: Add a new muc_sub tag to all the relevant commands
  • Commands: Improve syntax of many commands documentation
  • Commands: Use list arguments in many commands that used separators
  • Commands: set_presence: switch priority argument from string to integer
  • ejabberd_commands: Add the command API version as a tag vX
  • ejabberd_ctl: Add support for list and tuple arguments
  • ejabberd_xmlrpc: Fix support for restuple error response
  • mod_http_api: When no specific API version is requested, use the latest

Compilation with Rebar3/Elixir/Mix

  • Fix compilation with Erlang/OTP 27: don’t use the reserved word ‘maybe’
  • configure: Fix explanation of --enable-group option (#4135)
  • Add observer and runtime_tools in releases when --enable-tools
  • Update “make translations” to reduce build requirements
  • Use Luerl 1.0 for Erlang 20, 1.1.1 for 21-26, and temporary fork for 27
  • Makefile: Add install-rel and uninstall-rel
  • Makefile: Rename make rel to make prod
  • Makefile: Update make edoc to use ExDoc, requires mix
  • Makefile: No need to use escript to run rebar|rebar3|mix
  • configure: If --with-rebar=rebar3 but rebar3 not system-installed, use local one
  • configure: Use Mix or Rebar3 by default instead of Rebar2 to compile ejabberd
  • ejabberdctl: Detect problem running iex or etop and show explanation
  • Rebar3: Include Elixir files when making a release
  • Rebar3: Workaround to fix protocol consolidation
  • Rebar3: Add support to compile Elixir dependencies
  • Rebar3: Compile explicitly our Elixir files when --enable-elixir
  • Rebar3: Provide proper path to iex
  • Rebar/Rebar3: Update binaries to work with Erlang/OTP 24-27
  • Rebar/Rebar3: Remove Elixir as a rebar dependency
  • Rebar3/Mix: If dev profile/environment, enable tools automatically
  • Elixir: Fix compiling ejabberd as a dependency (#4128)
  • Elixir: Fix ejabberdctl start/live when installed
  • Elixir: Fix: FORMATTER ERROR: bad return value (#4087)
  • Elixir: Fix: Couldn’t find file Elixir Hex API
  • Mix: Enable stun by default when vars.config not found
  • Mix: New option vars_config_path to set path to vars.config (#4128)
  • Mix: Fix ejabberdctl iexlive problem locating iex in an OTP release

Full Changelog

https://github.com/processone/ejabberd/compare/23.10…24.02

ejabberd 24.02 download & feedback

As usual, the release is tagged in the Git source code repository on GitHub.

The source package and installers are available in ejabberd Downloads page. To check the *.asc signature files, see How to verify ProcessOne downloads integrity.

For convenience, there are alternative download locations like the ejabberd DEB/RPM Packages Repository and the GitHub Release / Tags.

The ecs container image is available in docker.io/ejabberd/ecs and ghcr.io/processone/ecs. The alternative ejabberd container image is available in ghcr.io/processone/ejabberd.

If you consider that you’ve found a bug, please search or fill a bug report on GitHub Issues.

The post ejabberd 24.02 first appeared on ProcessOne.

by Jérôme Sautret at February 28, 2024 19:01

WebPush support on your fluux.io instance

We’re excited to announce the latest enhancement to Fluux.io services – the integration of WebPush support. This significant update extends our services beyond
FCM/APNs, enabling push notifications for XMPP across various platforms. Now, our push notification capabilities are not limited to native mobile clients on iOS, macOS and Android, but also extend to web applications on browsers like Safari, Chrome, Firefox and more. This includes support for mobile versions of Safari and Chrome. This advancement broadens the scope for XMPP clients, offering new possibilities and a more extensive reach. Please note that WebPush support is also available to our customers using our on-premise ejabberd Business Edition.

To enable it, go to your services in your fluux.io console, select “Push Notifications” and then “+ WebPush”.

You will be prompted for an appid (typically the domain you want to enable WebPush on), for example fluux.io here. It will generate a VAPID key that will be used by ejabberd to sign the push notifications sent to the user’s browser.

Checking “View Config” will allow you to see the VAPID public key. It will be required to let the browser subscribe to notifications. Your website also needs to register a service worker that will be responsible for displaying the notification when a push is received.

As an example, we provide a small ejabberd client to test the whole workflow. It is pre-populated with a test user and associated appid/key.

The first step is to authenticate an XMPP user through your service. Then click “Enable Push”.

It will ask for authorization to enable push notifications and create a subscription to the FCM/Apple/Mozilla services. Then the XMPP client (using strophe.js) will send a stanza to enable offline messaging. ejabberd will now send a notification to this entry point, which will send a push to the user’s browser.

To trigger it, disconnect/close all open XMPP sessions of your test user and send them a message from another test user. Your browser will display a notification from your website with the message snippet and its author.

Alternatively, you can check the test user and its associated devices:

and send a test notification:

The post WebPush support on your fluux.io instance first appeared on ProcessOne.

by Sébastien Luquet at February 28, 2024 10:42

ejabberd turns 20

ejabberd is a piece of software that was born 20 years ago. This is a long time, even at the scale of the Internet. And yet, what ejabberd represents has not always been obvious. It took us a long time to realize what was so important about ejabberd. Why have we been developing it for 20 years? Why are we pushing it further even today? What makes it so special?


ejabberd is a scalable messaging server. That sums it all up, and yet it does not do justice to this critical piece of Internet infrastructure. Sure, it is known to be the most scalable XMPP server, so scalable that it was used as a building brick for the WhatsApp messaging service. This is something we have always been proud of, something you can easily brag about when meeting your friends.

But is that just it? Of course not. Today, with the troubles at Twitter, something has become clear.

ejabberd is important because it helped build much more than WhatsApp or any of the other big-name, high-profile projects we have built. It is important because it lets people communicate in a federated way. It is important because it implements open protocols, and now several of them: XMPP, MQTT, SIP and now Matrix.

It’s about federation

ejabberd is about federation. It helps people on different servers, domains, companies, communities or even countries chat together. And today, even more than 20 years ago, that really matters. We have built ejabberd for 20 years because it is a critical building brick of what makes the Internet exist: openness, interoperability, federation. It is one of the few pieces of software that prosper outside of the spotlight and make the Internet what it is, alongside, for example, web and mail servers.

This is something we are pondering as we think about the next steps, the next 20 years. But deep down, we know for sure what we are about: ejabberd is about federation. You will read more from us here soon. It is a tradition: no birthday celebration speech is complete without looking back at the past.

It is hard to track all ejabberd usage, but we know that ejabberd empowers more than a billion users. Not bad for a piece of code we wrote. Trillions of messages went through our lines of code.

As mentioned in this post ten years ago:

Closed protocols come and go – ejabberd and XMPP remains

Happy 20th birthday, ejabberd!

Brief timeline

The very first public commit to ejabberd’s source code was made by Alexey Shchepin on the 16th of November 2002. That was on the Jabber.ru CVS server. Later, when that machine had technical problems, the development code moved to JabberStudio CVS.

The first official ejabberd release was ejabberd 0.5 in November 2003. The ejabberd home page at that time was a simple HTML page. It’s also worth checking the early stage of the Ejabberd Installation and Operation Guide. Notice that this first ejabberd logo represented a frog-like animal sitting on a “Jabber globe bulb,” with bat wings, dangerous-looking cogs, and an Erlang suit.

After the 0.7.5 release in October 2004, the ejabberd home page moved from JabberStudio to ejabberd.jabber.ru, and the bug tracker to Jabber.ru’s Bugzilla. For this Drupal site, the logo changed to a hedgehog, and it would remain for ten years, until the final website and logo update in 2015.

At the beginning of 2005, JabberStudio CVS had technical difficulties and the development code moved to the ProcessOne SVN. Notice that ProcessOne contracted Alexey to work on J-EAI, a project based on ejabberd specially designed for certain business uses, and later extended that relationship to ejabberd itself.

In February 2005 the source code repository moved from SVN to Git, and the bug tracker to JIRA. Around October 2010 the source code repository and the bug tracker were finally moved to GitHub.

From around that time, there’s an interview with Alexey Shchepin which covers the initial concept and years of ejabberd development. By the way, there was another interview two years ago.

The ejabberd code base got a massive change with the data binarization (using Erlang binaries instead of Erlang strings for data representation) in March 2013, which jumped the ejabberd version from 2.1.12 to 13.03.

The following years brought another major source code change: the move of much of the C/C++ code to independent external libraries.

Today, ejabberd is not just about XMPP. Even if it is mostly known for its great XMPP support, it also supports several other protocols:
– SIP support to connect SIP phones was added in 2014 (see ejabberd 14.05)
– Support for the MQTT protocol, to better support Internet of Things use cases, was added initially in the Business Edition, and some months later added to the Community Server in 19.02.
– Right now, Matrix federation is being introduced to allow interop between ejabberd and Matrix servers, in ejabberd Business Edition internally and on the Fluux ejabberd SaaS platform. It will come later to the ejabberd Community Server.

ejabberd keeps on improving at a steady pace and is happy to open up to other protocols and communities.

Some source code statistics

The oldest unchanged function in ejabberd is probably one of the least used: stop/0. And the oldest functional line is the SETS macro definition.

The ejabberd repository got 1,070,325 line insertions and 901,287 line deletions. When counting both ejabberd and the dependency libraries, they got 1,693,180 line insertions and 1,108,873 line deletions. With all this, the ejabberd source code went from 13 files to 868, and from 1,448 lines of code to 480,961.

Looking at the programming languages of ejabberd and its libraries, Erlang is obviously the major one, and C comes as a relevant second:

Language       Files    Lines     Code  Comments  Blanks
ABNF               3      128      110         3      15
ASN.1              1       14       10         0       4
Autoconf          14      696      544        33     119
Batch              4       31       21         0      10
C                 20   187549   139764     39614    8171
C Header           8    11199     3783      6979     437
C++                1      533      442        17      74
CSS                5      532      507         0      25
Elixir            32     1888     1469       130     289
Erlang           604   247901   208912     18368   20621
JavaScript         2       23       21         1       1
Lua                1       16       16         0       0
Makefile          21      848      635        10     203
Perl               3     1086      897        63     126
Python             1       53       49         0       4
RPM Specfile       3     5408     3928      1059     421
Shell             21     4031     3263       336     432
SQL                9     3857     2994       303     560
TCL                3     1179     1002        69     108
Plain Text        13     1870        0      1561     309
YAML              20     1448     1357        53      38

The post ejabberd turns 20 first appeared on ProcessOne.

by Mickaël Rémond at February 28, 2024 10:41

ejabberd 23.01

Almost three months after the previous release, ejabberd 23.01 includes many bug fixes, several improvements and some new features.

A new module, mod_mqtt_bridge, can be used to replicate changes to MQTT topics between local and remote servers.

A more detailed explanation of those topics and other features:

Erlang/OTP 19.3 discouraged

Remember that the use of Erlang/OTP 19.3 is discouraged, and support for it will be removed in a future release. Please upgrade to Erlang/OTP 20.0 or newer. Check more details in the ejabberd 22.10 release announcement.

New MQTT bridge

This new module allows synchronizing topic changes between local and remote servers. It can be configured to replicate local changes to a remote server, or to subscribe to topics on a remote server and update local copies when they change.

When connecting to a remote server you can use the native or WebSocket-encapsulated protocol, and you can connect using either the v4 or v5 protocol. It can authenticate using username/password pairs or with client TLS certificates.
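
The release notes do not include a configuration example, so the following is only a hypothetical sketch of how such a bridge might be declared; the option names (servers, publish, subscribe, authentication) are assumptions based on the description above, so check the mod_mqtt_bridge documentation for the actual schema:

modules:
  mod_mqtt_bridge:
    servers:
      "mqtt://remote.example.com":
        publish:
          "localtopic": "remotetopic"
        subscribe:
          "remotetopic": "localtopic"
        authentication:
          username: "user"
          password: "pass"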

New Hooks

Regarding MQTT support, there are several new hooks:

  • mqtt_publish: New hook for MQTT publish event
  • mqtt_subscribe and mqtt_unsubscribe: New hooks for MQTT subscribe & unsubscribe events

New option log_modules_fully

The loglevel top-level option specifies the verbosity of log files generated by ejabberd.

If you want specific modules to log everything, independently of whatever value you have configured in loglevel, you can now use the new log_modules_fully option.

For example, if you are investigating some problem in ejabberd_sm and mod_client_state:

loglevel: warning
log_modules_fully: [ejabberd_sm, mod_client_state]

(This option works only on systems with Erlang 22 or newer).

Changes in option outgoing_s2s_families

The outgoing_s2s_families top-level option specifies which address families to try, in what order.

The default value has now been changed to try IPv6 first, as servers are typically in data centers where IPv6 is more commonly enabled than on clients. If IPv6 is not present, it will simply fall back to IPv4.

By the way, this option is obsolete and irrelevant when using ejabberd 23.01 and Erlang/OTP 22, or newer versions of them.

Changes in option captcha_cmd

The captcha_cmd top-level option specifies the full path to a script that can generate a CAPTCHA image. Now this option may specify an Erlang module name, which should implement a function to generate a CAPTCHA image.

ejabberd does not include any such module, but there are two available in the ejabberd-contrib repository that you can install and try: mod_ecaptcha and mod_captcha_rust.
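
For example, after installing mod_ecaptcha from ejabberd-contrib, pointing the option at the module instead of a script should look roughly like this (a minimal sketch):

captcha_cmd: mod_ecaptcha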

DOAP file

Since ejabberd 15.06, the protocols implemented or supported by ejabberd have been defined in the corresponding source code modules. Until now, only the XEP number and supported version were tracked. From now on, it’s possible to document which ejabberd version first implemented it, the implementation status and an arbitrary comment.

Until now, that information was only used by the script tools/check_xep_versions.sh. A new script, tools/generate-doap.sh, has been added to generate a DOAP file with that information, and a new target has been added to the Makefile: make doap.

And that DOAP file is now published as ejabberd.doap in the git repository. That file is read by the XMPP.org website to show ejabberd’s protocols, see XMPP Servers: ejabberd.

VSCode

Support for Visual Studio Code and variants is vastly improved. Thanks to the Erlang LS VSCode extension, the ejabberd git repository includes support for developing, compiling and debugging ejabberd with Visual Studio Code, VSCodium, Coder’s code-server and Github Codespaces.

See more details in the ejabberd Docs: VSCode page.

ChangeLog

General

  • Add misc:uri_parse/2 to allow declaring default ports for protocols
  • CAPTCHA: Add support to define module instead of path to script
  • Clustering: Handle mnesia_system_event mnesia_up when other node joins this (#3842)
  • ConverseJS: Don’t set i18n option because Converse enforces it instead of browser lang (#3951)
  • ConverseJS: Try to redirect access to mod_conversejs files to the CDN when there are no local copies
  • ext_mod: compile C files and install them in ejabberd’s priv
  • ext_mod: Support to get module status from Elixir modules
  • make-binaries: reduce log output
  • make-binaries: Bump zlib version to 1.2.13
  • MUC: Don’t store mucsub presence events in offline storage
  • MUC: hibernation_time is not an option worth storing in room state (#3946)
  • Multicast: Jid format when multicastc was cached (#3950)
  • mysql: Pass ssl options to mysql driver
  • pgsql: Do not set standard_conforming_strings to off (#3944)
  • OAuth: Accept jid as a HTTP URL query argument
  • OAuth: Handle when client is not identified
  • PubSub: Expose the pubsub#type field in disco#info query to the node (#3914)
  • Translations: Update German translation

Admin

  • api_permissions: Fix option crash when doesn’t have who: section
  • log_modules_fully: New option to list modules that will log everything
  • outgoing_s2s_families: Changed option’s default to IPv6, and fall back to IPv4
  • Fix bash completion when using Relive or other install methods
  • Fix portability issue with some shells (#3970)
  • Allow admin command to subscribe new users to members_only rooms
  • Use alternative split/2 function that works with Erlang/OTP as old as 19.3
  • Silent warning in OTP24 about not specified cacerts in SQL connections
  • Fix compilation warnings with Elixir 1.14

DOAP

  • Support extended -protocol erlang attribute
  • Add extended RFCs and XEP details to some protocol attributes
  • tools/generate-doap.sh: New script to generate DOAP file, add make doap (#3915)
  • ejabberd.doap: New DOAP file describing ejabberd supported protocols

MQTT

  • Add MQTT bridge module
  • Add support for certificate authentication in MQTT bridge
  • Implement reload in MQTT bridge
  • Add support for websockets to MQTT bridge
  • Recognize ws5/wss5 urls in MQTT bridge
  • mqtt_publish: New hook for MQTT publish event
  • mqtt_(un)subscribe: New hooks for MQTT subscribe & unsubscribe events

VSCode

  • Improve .devcontainer to use devcontainer image and .vscode
  • Add .vscode files to instruct VSCode how to run ejabberd
  • Add Erlang LS default configuration
  • Add Elvis default configuration

Full Changelog

https://github.com/processone/ejabberd/compare/22.10…23.01

ejabberd 23.01 download & feedback

As usual, the release is tagged in the Git source code repository on GitHub.

The source package and installers are available in ejabberd Downloads page. To check the *.asc signature files, see How to verify ProcessOne downloads integrity.

For convenience, there are alternative download locations like the ejabberd DEB/RPM Packages Repository and the GitHub Release / Tags.

The Docker image is in Docker Hub, and there’s an alternative Container image in GitHub Packages.

If you suspect that you’ve found a bug, please search or fill a bug report on GitHub Issues.

The post ejabberd 23.01 first appeared on ProcessOne.

by Jérôme Sautret at February 28, 2024 10:40

ejabberd 23.04

This new ejabberd 23.04 release includes many improvements and bug fixes, as well as some new features.

ejabberd 23.04

  • Many SQL database improvements
  • mod_mam support for XEP-0425: Message Moderation
  • New mod_muc_rtbl, Real-Time Block List for MUC rooms
  • Binaries use Erlang/OTP 25.3, and changes in containers

A more detailed explanation of these topics and other features:

Many improvements to SQL databases

There are many improvements in the area of SQL databases (see #3980 and #3982):

  • Added support for migrating MySQL and MS SQL to new schema, fixed a long-standing bug, and many other improvements.
  • Regarding MS SQL, there are schema fixes, added support for new schema and the corresponding schema migration, along with other minor improvements and bugfixes.
  • The automated ejabberd tests now also run on updated schema databases, and support for running tests on MS SQL has been added.
  • Fixed other minor SQL schema inconsistencies, removed unnecessary indexes and changed PostgreSQL SERIAL columns to BIGSERIAL columns.

Please upgrade your existing SQL database; check the notes later in this document!

Added mod_mam support for XEP-0425: Message Moderation

XEP-0425: Message Moderation allows a Multi-User Chat (XEP-0045) moderator to moderate certain group chat messages, for example by removing them from the group chat history, as part of an effort to address and resolve issues such as message spam, inappropriate venue language, or revealing private personal information of others. It also allows moderators to correct a message on another user’s behalf, or flag a message as inappropriate, without having to retract it.

Clients that currently support this XEP are Gajim, Converse.js and Monocles, while Poezio and XMPP Web have read-only support.

New mod_muc_rtbl module

This new module implements Real-Time Block List for MUC rooms. It works by monitoring remote pubsub nodes according to the specification described in xmppbl.org.
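
A minimal sketch to try it out is simply enabling the module with its default settings, which, per the description above, monitor the xmppbl.org service:

modules:
  mod_muc_rtbl: {}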

captcha_url option now accepts auto value

In recent ejabberd releases, captcha_cmd got support for macros (in ejabberd 22.10) and support for using modules (in ejabberd 23.01).

Now captcha_url gets an improvement: if set to auto, it tries to detect the URL automatically, taking into account the ejabberd configuration. This is now the default. This should be good enough in most cases, but manually setting the URL may be necessary when using port forwarding or very specific setups.
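
As a small sketch, the automatic default and a manual override could look like this (the explicit URL is purely illustrative):

captcha_url: auto
# or, only when auto-detection is not enough (port forwarding, special setups):
# captcha_url: https://xmpp.example.org:5443/captcha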

Erlang/OTP 19.3 is deprecated

This is the last ejabberd release with support for Erlang/OTP 19.3. If you have not already done so, please upgrade to Erlang/OTP 20.0 or newer before the next ejabberd release. See the ejabberd 22.10 release announcement for more details.

About the binary packages provided for ejabberd:

  • The binary installers and container images now use Erlang/OTP 25.3 and Elixir 1.14.3.
  • The mix, ecs and ejabberd container images now use Alpine 3.17.
  • The ejabberd container image now supports an alternative build method, useful to work around a problem in QEMU and Erlang 25 when building the image for the arm64 architecture.

Erlang node name in ecs container image

The ecs container image is built using the files from docker-ejabberd/ecs and published in docker.io/ejabberd/ecs. This image generally gets only minimal fixes, no major or breaking changes, but in this release it got one change that requires administrator intervention.

The Erlang node name is now fixed to ejabberd@localhost by default, instead of being variable based on the container hostname. If you previously allowed ejabberd to choose its node name (which was random), it will now create a new mnesia database instead of using the previous one:

$ docker exec -it ejabberd ls /home/ejabberd/database/
ejabberd@1ca968a0301a
ejabberd@localhost
...

A simple solution is to create a container that provides ERLANG_NODE_ARG with the old erlang node name, for example:

docker run ... -e ERLANG_NODE_ARG=ejabberd@1ca968a0301a

or in docker-compose.yml

version: '3.7'
services:
  main:
    image: ejabberd/ecs
    environment:
      - ERLANG_NODE_ARG=ejabberd@1ca968a0301a

Another solution is to change the mnesia node name in the mnesia spool files.

Other improvements to the ecs container image

In addition to the previously mentioned change to the default erlang node name, the ecs container image has received other improvements:

  • For each commit to the docker-ejabberd repository that affects ecs and mix container images, those images are uploaded as artifacts and are available for download in the corresponding runs.
  • When a new release is tagged in the docker-ejabberd repository, the image is automatically published to ghcr.io/processone/ecs, in addition to being manually published to the Docker Hub.
  • There are new sections in the ecs README file: Clustering and Clustering Example.

Documentation Improvements

In addition to the usual improvements and fixes, some sections of the ejabberd documentation have been improved:

Acknowledgments

We would like to thank the following people for their contributions to the source code, documentation, and translation for this release:

And also to all the people who help solve doubts and problems in the ejabberd chatroom and issue tracker.

Updating SQL Databases

These notes allow you to apply the SQL database schema improvements in this ejabberd release to your existing SQL database. Please consider which database you are using and whether it is the default or the new schema.

PostgreSQL new schema:

Fixes a long-standing bug in the new schema on PostgreSQL. The fix for all existing affected installations is the same:

ALTER TABLE vcard_search DROP CONSTRAINT vcard_search_pkey;
ALTER TABLE vcard_search ADD PRIMARY KEY (server_host, lusername);

PostgreSQL default or new schema:

Convert columns to allow up to 2 billion rows in these tables. This conversion requires full table rebuilds and will take a long time if the tables already have many rows. It is optional: it is not necessary if the tables will never grow that large.

ALTER TABLE archive ALTER COLUMN id TYPE BIGINT;
ALTER TABLE privacy_list ALTER COLUMN id TYPE BIGINT;
ALTER TABLE pubsub_node ALTER COLUMN nodeid TYPE BIGINT;
ALTER TABLE pubsub_state ALTER COLUMN stateid TYPE BIGINT;
ALTER TABLE spool ALTER COLUMN seq TYPE BIGINT;

PostgreSQL or SQLite default schema:

DROP INDEX i_rosteru_username;
DROP INDEX i_sr_user_jid;
DROP INDEX i_privacy_list_username;
DROP INDEX i_private_storage_username;
DROP INDEX i_muc_online_users_us;
DROP INDEX i_route_domain;
DROP INDEX i_mix_participant_chan_serv;
DROP INDEX i_mix_subscription_chan_serv_ud;
DROP INDEX i_mix_subscription_chan_serv;
DROP INDEX i_mix_pam_us;

PostgreSQL or SQLite new schema:

DROP INDEX i_rosteru_sh_username;
DROP INDEX i_sr_user_sh_jid;
DROP INDEX i_privacy_list_sh_username;
DROP INDEX i_private_storage_sh_username;
DROP INDEX i_muc_online_users_us;
DROP INDEX i_route_domain;
DROP INDEX i_mix_participant_chan_serv;
DROP INDEX i_mix_subscription_chan_serv_ud;
DROP INDEX i_mix_subscription_chan_serv;
DROP INDEX i_mix_pam_us;

Now add an index that might be missing:

In PostgreSQL:

CREATE INDEX i_push_session_sh_username_timestamp ON push_session USING btree (server_host, username, timestamp);

In SQLite:

CREATE INDEX i_push_session_sh_username_timestamp ON push_session (server_host, username, timestamp);

MySQL default schema:

ALTER TABLE rosterusers DROP INDEX i_rosteru_username;
ALTER TABLE sr_user DROP INDEX i_sr_user_jid;
ALTER TABLE privacy_list DROP INDEX i_privacy_list_username;
ALTER TABLE private_storage DROP INDEX i_private_storage_username;
ALTER TABLE muc_online_users DROP INDEX i_muc_online_users_us;
ALTER TABLE route DROP INDEX i_route_domain;
ALTER TABLE mix_participant DROP INDEX i_mix_participant_chan_serv;
ALTER TABLE mix_subscription DROP INDEX i_mix_subscription_chan_serv_ud;
ALTER TABLE mix_subscription DROP INDEX i_mix_subscription_chan_serv;
ALTER TABLE mix_pam DROP INDEX i_mix_pam_u;

MySQL new schema:

ALTER TABLE rosterusers DROP INDEX i_rosteru_sh_username;
ALTER TABLE sr_user DROP INDEX i_sr_user_sh_jid;
ALTER TABLE privacy_list DROP INDEX i_privacy_list_sh_username;
ALTER TABLE private_storage DROP INDEX i_private_storage_sh_username;
ALTER TABLE muc_online_users DROP INDEX i_muc_online_users_us;
ALTER TABLE route DROP INDEX i_route_domain;
ALTER TABLE mix_participant DROP INDEX i_mix_participant_chan_serv;
ALTER TABLE mix_subscription DROP INDEX i_mix_subscription_chan_serv_ud;
ALTER TABLE mix_subscription DROP INDEX i_mix_subscription_chan_serv;
ALTER TABLE mix_pam DROP INDEX i_mix_pam_us;

Add an index that might be missing:

CREATE INDEX i_push_session_sh_username_timestamp ON push_session (server_host, username(191), timestamp);

MS SQL

DROP INDEX [rosterusers_username] ON [rosterusers];
DROP INDEX [sr_user_jid] ON [sr_user];
DROP INDEX [privacy_list_username] ON [privacy_list];
DROP INDEX [private_storage_username] ON [private_storage];
DROP INDEX [muc_online_users_us] ON [muc_online_users];
DROP INDEX [route_domain] ON [route];
go

MS SQL schema was missing some tables added in earlier versions of ejabberd:

CREATE TABLE [dbo].[mix_channel] (
    [channel] [varchar] (250) NOT NULL,
    [service] [varchar] (250) NOT NULL,
    [username] [varchar] (250) NOT NULL,
    [domain] [varchar] (250) NOT NULL,
    [jid] [varchar] (250) NOT NULL,
    [hidden] [smallint] NOT NULL,
    [hmac_key] [text] NOT NULL,
    [created_at] [datetime] NOT NULL DEFAULT GETDATE()
) TEXTIMAGE_ON [PRIMARY];

CREATE UNIQUE CLUSTERED INDEX [mix_channel] ON [mix_channel] (channel, service)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);

CREATE INDEX [mix_channel_serv] ON [mix_channel] (service)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);

CREATE TABLE [dbo].[mix_participant] (
    [channel] [varchar] (250) NOT NULL,
    [service] [varchar] (250) NOT NULL,
    [username] [varchar] (250) NOT NULL,
    [domain] [varchar] (250) NOT NULL,
    [jid] [varchar] (250) NOT NULL,
    [id] [text] NOT NULL,
    [nick] [text] NOT NULL,
    [created_at] [datetime] NOT NULL DEFAULT GETDATE()
) TEXTIMAGE_ON [PRIMARY];

CREATE UNIQUE INDEX [mix_participant] ON [mix_participant] (channel, service, username, domain)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);

CREATE INDEX [mix_participant_chan_serv] ON [mix_participant] (channel, service)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);

CREATE TABLE [dbo].[mix_subscription] (
    [channel] [varchar] (250) NOT NULL,
    [service] [varchar] (250) NOT NULL,
    [username] [varchar] (250) NOT NULL,
    [domain] [varchar] (250) NOT NULL,
    [node] [varchar] (250) NOT NULL,
    [jid] [varchar] (250) NOT NULL
);

CREATE UNIQUE INDEX [mix_subscription] ON [mix_subscription] (channel, service, username, domain, node)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);

CREATE INDEX [mix_subscription_chan_serv_ud] ON [mix_subscription] (channel, service, username, domain)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);

CREATE INDEX [mix_subscription_chan_serv_node] ON [mix_subscription] (channel, service, node)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);

CREATE INDEX [mix_subscription_chan_serv] ON [mix_subscription] (channel, service)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);

CREATE TABLE [dbo].[mix_pam] (
    [username] [varchar] (250) NOT NULL,
    [channel] [varchar] (250) NOT NULL,
    [service] [varchar] (250) NOT NULL,
    [id] [text] NOT NULL,
    [created_at] [datetime] NOT NULL DEFAULT GETDATE()
) TEXTIMAGE_ON [PRIMARY];

CREATE UNIQUE CLUSTERED INDEX [mix_pam] ON [mix_pam] (username, channel, service)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);

go

MS SQL also had some incompatible column types:

ALTER TABLE [dbo].[muc_online_room] ALTER COLUMN [node] VARCHAR (250);
ALTER TABLE [dbo].[muc_online_room] ALTER COLUMN [pid] VARCHAR (100);
ALTER TABLE [dbo].[muc_online_users] ALTER COLUMN [node] VARCHAR (250);
ALTER TABLE [dbo].[pubsub_node_option] ALTER COLUMN [name] VARCHAR (250);
ALTER TABLE [dbo].[pubsub_node_option] ALTER COLUMN [val] VARCHAR (250);
ALTER TABLE [dbo].[pubsub_node] ALTER COLUMN [plugin] VARCHAR (32);
go

… and mqtt_pub table was incorrectly defined in old schema:

ALTER TABLE [dbo].[mqtt_pub] DROP CONSTRAINT [i_mqtt_topic_server];
ALTER TABLE [dbo].[mqtt_pub] DROP COLUMN [server_host];
ALTER TABLE [dbo].[mqtt_pub] ALTER COLUMN [resource] VARCHAR (250);
ALTER TABLE [dbo].[mqtt_pub] ALTER COLUMN [topic] VARCHAR (250);
ALTER TABLE [dbo].[mqtt_pub] ALTER COLUMN [username] VARCHAR (250);
CREATE UNIQUE CLUSTERED INDEX [mqtt_topic] ON [mqtt_pub] (topic)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);
go

… and sr_group index/PK was inconsistent with other DBs:

ALTER TABLE [dbo].[sr_group] DROP CONSTRAINT [sr_group_PRIMARY];
CREATE UNIQUE CLUSTERED INDEX [sr_group_name] ON [sr_group] ([name])
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);
go

ChangeLog

General

  • New s2s_out_bounce_packet hook
  • Re-allow anonymous connections for connections without client certificates (#3985)
  • Stop ejabberd_system_monitor before stopping node
  • captcha_url option now accepts auto value, and it’s the default
  • mod_mam: Add support for XEP-0425: Message Moderation
  • mod_mam_sql: Fix problem with results of mam queries using rsm with max and before
  • mod_muc_rtbl: New module for Real-Time Block List for MUC rooms (#4017)
  • mod_roster: Set roster name from XEP-0172, or the stored one (#1611)
  • mod_roster: Preliminary support to store extra elements in subscription request (#840)
  • mod_pubsub: Pubsub xdata fields max_item/item_expira/children_max use max not infinity
  • mod_vcard_xupdate: Invalidate vcard_xupdate cache on all nodes when vcard is updated

Admin

  • ext_mod: Improve support for loading *.so files from ext_mod dependencies
  • Improve output in gen_html_doc_for_commands command
  • Fix ejabberdctl output formatting (#3979)
  • Log HTTP handler exceptions

MUC

  • New command get_room_history
  • Persist none role for outcasts
  • Try to populate room history from mam when unhibernating
  • Make mod_muc_room:set_opts process persistent flag first
  • Allow passing affiliations and subscribers to create_room_with_opts command
  • Store state in db in mod_muc:create_room()
  • Make subscribers members by default

SQL schemas

  • Fix a long standing bug in new schema migration
  • update_sql command: Many improvements in new schema migration
  • update_sql command: Add support to migrate MySQL too
  • Change PostgreSQL SERIAL to BIGSERIAL columns
  • Fix minor SQL schema inconsistencies
  • Remove unnecessary indexes
  • New SQL schema migrate fix

MS SQL

  • MS SQL schema fixes
  • Add new schema for MS SQL
  • Add MS SQL support for new schema migration
  • Minor MS SQL improvements
  • Fix MS SQL error caused by ORDER BY in subquery

SQL Tests

  • Add support for running tests on MS SQL
  • Add ability to run tests on upgraded DB
  • Un-deprecate ejabberd_config:set_option/2
  • Use python3 to run extauth.py for tests
  • Correct README for creating test docker MS SQL DB
  • Fix TSQLlint warnings in MSSQL test script

Testing

  • Fix Shellcheck warnings in shell scripts
  • Fix Remark-lint warnings
  • Fix Prospector and Pylint warnings in test extauth.py
  • Stop testing ejabberd with Erlang/OTP 19.3, as Github Actions no longer supports ubuntu-18.04
  • Test only with oldest OTP supported (20.0), newest stable (25.3) and bleeding edge (26.0-rc2)
  • Upload Common Test logs as artifact in case of failure

ecs container image

  • Update Alpine to 3.17 to get Erlang/OTP 25 and Elixir 1.14
  • Add tini as runtime init
  • Set ERLANG_NODE fixed to ejabberd@localhost
  • Upload images as artifacts to Github Actions
  • Publish tag images automatically to ghcr.io

ejabberd container image

  • Update Alpine to 3.17 to get Erlang/OTP 25 and Elixir 1.14
  • Add METHOD to build container using packages (#3983)
  • Add tini as runtime init
  • Detect runtime dependencies automatically
  • Remove unused Mix stuff: ejabberd script and static COOKIE
  • Copy captcha scripts to /opt/ejabberd-*/lib like the installers
  • Expose only HOME volume, it contains all the required subdirs
  • ejabberdctl: Don’t use .../releases/COOKIE, it’s no longer included

Installers

  • make-binaries: Bump versions, e.g. erlang/otp to 25.3
  • make-binaries: Fix building with erlang/otp 25.x
  • make-packages: Fix for installers workflow, which didn’t find lynx

Full Changelog

https://github.com/processone/ejabberd/compare/23.01…23.04

ejabberd 23.04 download & feedback

As usual, the release is tagged in the git source repository on GitHub.

The source package and installers are available on the ejabberd Downloads page. To verify the *.asc signature files, see How to verify the integrity of ProcessOne downloads.

For convenience, there are alternative download locations such as the ejabberd DEB/RPM Packages Repository and the GitHub Release / Tags.

The ecs container image is available in docker.io/ejabberd/ecs and ghcr.io/processone/ecs. The alternative ejabberd container image is available in ghcr.io/processone/ejabberd.

If you think you’ve found a bug, please search or file a bug report at GitHub Issues.

The post ejabberd 23.04 first appeared on ProcessOne.

by Jérôme Sautret at February 28, 2024 09:27

Automatic schema update in ejabberd

ejabberd 23.10 has a new feature that is currently in beta testing:
Automatic relational schema creation and update.

Previously, if you were using ejabberd with an external relational database, you might have to manually apply some schema changes that come with new features when you upgrade to a new ejabberd release. ejabberd can now handle this schema upgrade automatically. It can also create the schema on an empty database during a new deployment. It works with both old and new schemas.

This feature paves the way for more changes to our schema in the future. It is currently in beta testing, we recommend backing up your database before using it. To enable it in ejabberd 23.10, set this top-level option in your ejabberd.yml configuration file and restart ejabberd:

update_sql_schema: true

This is compatible with the following relational databases:

Feel free to test it and report any problems on GitHub Issues.

The post Automatic schema update in ejabberd first appeared on ProcessOne.

by Jérôme Sautret at February 28, 2024 09:27

February 27, 2024

ProcessOne

ejabberd 23.10

A new ejabberd release, ejabberd 23.10, is now published with more than 150 commits since the previous 23.04. It includes many new features and improvements, and also many more bugfixes.

  • Support for XEP-0402: PEP Native Bookmarks
  • Support for XEP-0421: Occupant Id
  • Many new options and features

A more detailed explanation of improvements and features:

Added support for XEP-0402: PEP Native Bookmarks

XEP-0402: PEP Native Bookmarks describes how to keep a list of chatroom bookmarks as PEP nodes on the PubSub service. That’s an improvement over XEP-0048: Bookmark Storage, which described how to store them in a single Private XML Storage element or a single PEP node.

mod_private now supports the bookmark conversion described in XEP-0402:
ejabberd synchronizes XEP-0402 bookmarks, private storage bookmarks and XEP-0048 bookmarks.

In this sense, the bookmarks_to_pep command performs an initial synchronization of bookmarks: it gets the bookmarks from Private XML Storage and stores them in PEP nodes as described in both XEP-0048 and XEP-0402.
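
For example, assuming a user tom on the XMPP domain localhost, the synchronization can be triggered from the command line like this (a sketch; check ejabberdctl help bookmarks_to_pep for the exact arguments in your version):

ejabberdctl bookmarks_to_pep tom localhost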

New mod_muc_occupantid module with support for XEP-0421: Occupant Id

XEP-0421: Anonymous unique occupant identifiers for MUCs is useful in anonymous MUC rooms and for message correction and message retraction. Right now the only client known to support XEP-0421 is Dino, since version 0.4.

ejabberd now implements XEP-0421 0.1.0 in mod_muc_occupantid. The module is quite simple and has no configurable options: just enable it in the modules section of your ejabberd.yml configuration file and restart ejabberd or run reload_config.
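
A minimal snippet for ejabberd.yml, following the description above (the module takes no options):

modules:
  ...
  mod_muc_occupantid: {}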

New option auth_external_user_exists_check

The new option auth_external_user_exists_check makes the user_check hook work better with authentication methods that don’t have a way to determine whether a user exists. This happens, for example, with JWT- and certificate-based authentication. As a result, enabling this option improves mod_offline and mod_mam handling of offline messages for those users. It reuses information stored by mod_last for this purpose.
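
A minimal sketch of how this could look in ejabberd.yml, assuming the option is set at the top level like other auth_* options (check the ejabberd Docs for the authoritative placement and default value):

auth_method: [jwt]
auth_external_user_exists_check: true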

Improved offline messages handling when using authentication methods without users lists

Authentication methods that manage the user list outside of ejabberd, for example JWT tokens or TLS certificate authentication, had issues with the processing of offline messages. Those methods had no way to tell whether a given user existed when that user was not logged in, and that blocked the processing of offline messages, which was only performed for users known to exist. This release adds code that also consults the data stored by mod_last for that purpose, which should fix offline messages for users that have logged in at least once before.

Changes in get_roster command

There are some changes in the result output of the get_roster command defined in mod_admin_extra:

  • ask is renamed to pending
  • group is renamed to groups
  • the new groups field is a list with all the group names
  • a contact that is in several groups is now listed only once, and the groups are properly listed.

For example, let’s say that admin@localhost has two contacts: a contact is present in two groups (group1 and group2), the other contact is only present in a group (group3).

The old get_roster command in ejabberd 23.04 and previous versions was like:

$ ejabberdctl get_roster admin localhost
jan@localhost jan   none    subscribe       group1
jan@localhost jan   none    subscribe       group2
tom@localhost tom   none    subscribe       group3

The new get_roster command in ejabberd 23.XX and newer versions returns this result:

$ ejabberdctl get_roster admin localhost
jan@localhost jan   none    subscribe       group1;group2
tom@localhost tom   none    subscribe       group3

Notice that, from now on, the ejabberdctl command-line tool represents list elements in results separated with “;”.

New halt command

Until now there were two API commands to stop ejabberd:

  • stop stops ejabberd gracefully, calling to stop each of its components (client sessions, modules, listeners, …)
  • stop_kindly first of all sends messages to all the online users and all the online MUC rooms, waits a few seconds, and then stops ejabberd gracefully.

Those commands are useful when ejabberd has been running for a long time, with many users connected, and you want to stop it.

A new command has been added: halt, which abruptly stops the ejabberd node without gracefully closing any of its components. It also returns error code 1. This command is useful if some problem is detected while ejabberd is starting.
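
For instance, assuming halt is exposed through ejabberdctl like any other API command, it can be invoked manually:

ejabberdctl halt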

For example, it is now used in the ecs and the ejabberd container images when CTL_ON_CREATE or CTL_ON_START were provided and failed to execute correctly. See docker-ejabberd#97 for details.

MySQL driver improvements

The MySQL driver will now use prepared statements whenever possible, which should reduce database load. This feature can be disabled with sql_prepared_statement: false.
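
In other words, to opt out of prepared statements, a top-level line like the following in ejabberd.yml should be enough (a sketch based on the option name mentioned above):

sql_prepared_statement: false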

We also added an alternative implementation of upsert that doesn’t use replace ... or insert ... on conflict update, as in some versions of MySQL those can lead to excessive deadlocks. We switch between implementations based on the server version, but it’s possible to override the version check by having:

sql_flags:
  - mysql_alternative_upsert

inside the configuration file.

New unix_socket listener option

When defining a listener, the port option can be a port number or a string in the form "unix:/path/to/socket" to create and listen on the unix domain socket /path/to/socket.

The new unix_socket listener option lets you customize some attributes of that unix socket file.

The configurable options are:

  • mode: which should be an octal
  • owner: which should be an integer
  • group: which should be an integer

Those values have no defaults: they are changed only when they are explicitly set.

Example configuration:

listen:
  -
    port: "unix://tmp/asd/socket"
    unix_socket:
      mode: '0775'
      owner: 117
      group: 135

New install_contrib_modules top-level option

The new install_contrib_modules top-level option lets you declare a list of modules from ejabberd-contrib that will be installed automatically by ejabberd when it is being started. This option is read during ejabberd start or configuration reload.

This option is equivalent to installing the module manually with the command ejabberdctl module_install whatever. It is useful when deploying ejabberd automatically with a configuration file that mentions a contrib module.

For example, let’s enable and configure some modules from ejabberd-contrib, and use the new option to ensure they get installed, all of this the very first time ejabberd runs. Extract from ejabberd.yml:

...

install_contrib_modules:
  - mod_statsdx
  - mod_webadmin_config

modules:
  mod_statsdx:
    hooks: true
  mod_webadmin_config: {}
  ...

The ejabberd.log file will show something like:

2023-09-25 15:32:40.282446+02:00 [info] Loading configuration from _build/relive/conf/ejabberd.yml
Module mod_statsdx has been installed and started.
The mod_statsdx configuration in your ejabberd.yml is used.
Module mod_webadmin_config has been installed and started.
The mod_webadmin_config configuration in your ejabberd.yml is used.
2023-09-25 15:32:42.201199+02:00 [info] Configuration loaded successfully

...
2023-09-25 15:32:43.163099+02:00 [info] ejabberd 23.04.115 is started in the node ejabberd@localhost in 3.15s
2023-09-25 15:32:47.069875+02:00 [info] Reloading configuration from _build/relive/conf/ejabberd.yml
2023-09-25 15:32:47.100917+02:00 [info] Configuration reloaded successfully

New notify_on option in mod_push

mod_push has a new option, notify_on, with these possible values:

  • all: generate a notification on any kind of XMPP stanzas. This is the default value.
  • messages: notifications are only triggered for actual chat messages with a body text (or some encrypted payload).
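
For example, to only be notified about actual chat messages, the option would be set in the modules section of ejabberd.yml like this:

modules:
  mod_push:
    notify_on: messages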

Add support to register nick in a room

Since ejabberd 13.06, a nick can be registered in the MUC service, which prevents anybody else from using that nick in any room of that MUC service.

Now ejabberd also supports registering a nick in a room, as described in XEP-0045, section 7.10: Registering with a Room.

Registering a nick in the MUC service or in a room is mutually exclusive:

  • A nick that is registered in the service cannot be registered in any room; not even the original owner can register it.
  • Similarly, a nick registered in any room cannot be registered in the service.

MUC room option allow_private_messages converted to allowpm

Until ejabberd 23.04, MUC rooms had a configurable option called allow_private_messages with possible values true or false.

Since ejabberd 23.10, that option is converted into allowpm, with possible values:

  • anyone: equivalent to allow_private_messages=true
  • none: equivalent to allow_private_messages=false
  • participants
  • moderators
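
As a sketch, the default for newly created rooms could be adjusted through mod_muc’s default_room_options (assuming, for illustration, that you want private messages restricted to participants; adapt the value to your needs):

modules:
  mod_muc:
    default_room_options:
      allowpm: participants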

gen_mod API to simplify hooks and IQ handlers registration

If you wrote some ejabberd module, you may want to update it to the simplified gen_mod API. This is not mandatory, because the old way of doing this is still supported.

Until now, Erlang modules that implemented ejabberd’s gen_mod behaviour called ejabberd_hooks:add and gen_iq_handler:add_iq_handler in their start functions. Similarly, in their stop functions they called ejabberd_hooks:delete and gen_iq_handler:remove_iq_handler.

Since ejabberd 23.10, there is an alternative way to do this: let your start function return {ok, List}, where List is a list of iq handlers and hooks that you want your module to register to. No need to unregister them in your stop function!

How to change your module to the new API? See the changes done in mod_adhoc.erl in commit 60002fc.

MS SQL requirements

To use the Microsoft SQL Server database, the libtdsodbc library is required, as explained in the corresponding section of the ejabberd Docs: Configuration > Databases > Microsoft SQL Server

Since this release, the ejabberd container image includes this library.

Please note that if you install ejabberd using the binary installers and want to use MS SQL, you must install the libtdsodbc library on your machine yourself. It cannot be included in the ejabberd installer because the ODBC drivers are loaded dynamically, depending on the ODBC backend in use.

Erlang/OTP 20.0 or higher required

This ejabberd release requires Erlang/OTP 20.0 or newer to compile and run; support for Erlang/OTP 19.3 has been dropped. If you are still using Erlang/OTP 19.3, please update to a more recent Erlang version. For example, the ejabberd binary installers and container images use Erlang/OTP 26.1. This requirement increase was announced almost a year ago; see the ejabberd 22.10 release announcement for more details.

If you are still using Erlang/OTP 19.3 and cannot update it right now, it may still be possible to compile ejabberd 23.10 with Erlang/OTP 19.3, but please note that there is no guarantee or support that it will compile or run correctly. If interested, revert the line changed in configure.ac in commit d299b97 and recompile.

Acknowledgments

We would like to thank the following people for their contributions to the source code, documentation, and translation for this release:

And also to all the people contributing in the ejabberd chatroom, issue tracker…

Improvements in ejabberd Business Edition

Customers of the ejabberd Business Edition, in addition to all those improvements and bugfixes, also get:

  • Push:
    • Add support for Webpush
    • Various APNS & GCM fixes and optimizations
    • async calls to push backends
    • Improved error messages
    • Improve error detection and reconnection strategy
    • New mod_push_logger module to log push related events
  • Matrix:
    • Add support for Matrix v10 rooms
    • Add SRV support in mod_matrix_gw_s2s
  • Misc:
    • Add max_concurrent_connections option to webhook
    • Add module for logging chat & jingle events in a separate file
    • Add retraction handling in MAM for p1db & dynamodb databases

ChangeLog

This is a more detailed list of changes in this ejabberd release:

Compilation

  • Erlang/OTP: Raise the requirement to Erlang/OTP 20.0 as a minimum
  • CI: Update tests to Erlang/OTP 26 and recent Elixir
  • Move Xref and Dialyzer options from workflows to rebar.config
  • Add sections to rebar.config to organize its content
  • Dialyzer dirty workarounds because re:mp() is not an exported type
  • When installing module already configured, keep config as example
  • Elixir 1.15 removed support for --app
  • Elixir: Improve support to stop external modules written in Elixir
  • Elixir: Update syntax of function calls as recommended by Elixir compiler
  • Elixir: When building OTP release with mix, keep ERLANG_NODE=ejabberd@localhost
  • ejabberdctl: Pass ERLANG_OPTS when calling erl to parse the INET_DIST_INTERFACE (#4066)

Commands

  • create_room_with_opts: Fix typo and move examples to args_example (#4080)
  • etop: Let ejabberdctl etop work in a release (if observer application is available)
  • get_roster: Command now returns groups in a list instead of newlines (#4088)
  • halt: New command to halt ejabberd abruptly with an error status code
  • ejabberdctl: Fix calling ejabberdctl command with wrong number of arguments with Erlang 26
  • ejabberdctl: Improve printing lists in results
  • ejabberdctl: Support policy=user in the help and return proper arguments
  • ejabberdctl: Document how to stop a debug shell: control+g

Container

  • Dockerfile: Add missing dependency for mssql databases
  • Dockerfile: Reorder stages and steps for consistency
  • Dockerfile: Use Alpine as base for METHOD=package
  • Dockerfile: Rename packages to improve compatibility
  • Dockerfile: Provide specific OTP and elixir vsn for direct compilation
  • Halt ejabberd if a command in CTL_ON_ fails during ejabberd startup

Core

  • auth_external_user_exists_check: New option (#3377)
  • gen_mod: Extend gen_mod API to simplify hooks and IQ handlers registration
  • gen_mod: Add shorter forms for gen_mod hook/iq_handler API
  • gen_mod: Update modules to the new gen_mod API
  • install_contrib_modules: New option to define contrib modules to install automatically
  • unix_socket: New listener option, useful when setting unix socket files (#4059)
  • ejabberd_systemd: Add a few debug messages
  • ejabberd_systemd: Avoid using gen_server timeout (#4054)(#4058)
  • ejabberd_listener: Increase default listen queue backlog value to 128, which is the default value on both Linux and FreeBSD (#4025)
  • OAuth: Handle badpass error message
  • When sending message on behalf of user, trigger user_send_packet (#3990)
  • Web Admin: In roster page move the AddJID textbox to top (#4067)
  • Web Admin: Show a warning when visiting webadmin with non-privileged account (#4089)

Docs

  • Example configuration: clarify 5223 tls options; specify s2s shaper
  • Make sure that policy=user commands have host instead of server arg in docs
  • Improve syntax of many command descriptions for the Docs site
  • Move example Perl extauth script from ejabberd git to Docs site
  • Remove obsolete example files, and add link in Docs to the archived copies

Installers (make-binaries)

  • Bump Erlang/OTP version to 26.1.1, and other dependencies
  • Remove outdated workaround
  • Don’t build Linux-PAM examples
  • Fix check for current Expat version
  • Apply minor simplifications
  • Don’t duplicate config entries
  • Don’t hard-code musl version
  • Omit unnecessary glibc setting
  • Set kernel version for all builds
  • Let curl fail on HTTP errors

Modules

  • mod_muc_log: Add trailing backslash to URLs shown in disco info
  • mod_muc_occupantid: New module with support for XEP-0421 Occupant Id (#3397)
  • mod_muc_rtbl: Better error handling (#4050)
  • mod_private: Add support for XEP-0402 PEP Native Bookmarks
  • mod_privilege: Don’t fail to edit roster (#3942)
  • mod_pubsub: Fix usage of the plugins option, which caused default_node_config to be ignored (#4070)
  • mod_pubsub: Add pubsub_delete_item hook
  • mod_pubsub: Report support of config-node-max in pep
  • mod_pubsub: Relay pubsub iq queries to muc members without using bare jid (#4093)
  • mod_pubsub: Allow pubsub node owner to overwrite items published by other persons
  • mod_push_keepalive: Delay wake_on_start
  • mod_push_keepalive: Don’t let hook crash
  • mod_push: Add notify_on option
  • mod_push: Set last-message-sender to bare JID
  • mod_register_web: Make redirect to a page that ends with / (#3177)
  • mod_shared_roster_ldap: Don’t crash in get_member_jid on empty output (#3614)

MUC

  • Add support to register nick in a room (#3455)
  • Convert allow_private_message MUC room option to allowpm (#3736)
  • Update xmpp version to send roomconfig_changesubject in disco#info (#4085)
  • Fix crash when loading room from DB older than ffa07c6, 23.04
  • Fix support to retract a MUC room message
  • Don’t always store messages passed through muc_filter_message (#4083)
  • Pass also MUC room retract messages over the muc_filter_message (#3397)
  • Pass MUC room private messages over the muc_filter_message too (#3397)
  • Store the subject author JID, and run muc_filter_message when sending subject (#3397)
  • Remove existing role information for users that are kicked from room (#4035)
  • Expand rule “mucsub subscribers are members in members only rooms” to more places

SQL

  • Add ability to force alternative upsert implementation in mysql
  • Properly parse mysql version even if it doesn’t have type tag
  • Use prepared statement with mysql
  • Add alternate version of mysql upsert
  • ejabberd_auth_sql: Reset scram fields when setting plain password
  • mod_privacy_sql: Fix return values from calculate_diff
  • mod_privacy_sql: Optimize set_list
  • mod_privacy_sql: Use more efficient way to calculate changes in set_privacy_list

Full Changelog

https://github.com/processone/ejabberd/compare/23.04…23.10

ejabberd 23.10 download & feedback

As usual, the release is tagged in the Git source code repository on GitHub.

The source package and installers are available on the ejabberd Downloads page. To verify the *.asc signature files, see How to verify ProcessOne downloads integrity.

For convenience, there are alternative download locations like the ejabberd DEB/RPM Packages Repository and the GitHub Release / Tags.

The ecs container image is available in docker.io/ejabberd/ecs and ghcr.io/processone/ecs. The alternative ejabberd container image is available in ghcr.io/processone/ejabberd.

If you think you’ve found a bug, please search or file a bug report at GitHub Issues.

The post ejabberd 23.10 first appeared on ProcessOne.

by Jérôme Sautret at February 27, 2024 14:47

February 22, 2024

JMP

Mobile-friendly Gateway to any SIP Provider

We have for a long time supported the public Cheogram SIP instance, which allows easy interaction between the federated Jabber network and the federated SIP network. When it comes to connecting to the phone network via a SIP provider, however, very few of these providers choose to interact with the federated SIP network at all. It has always been possible to work around this with a self-hosted PBX, but documentation on the best way to do this is scant. We have also heard from some that they would like hosting the gateway themselves to be easier, as increasingly people are familiar with Docker and not with other packaging formats. So, we have sponsored the development of a Docker packaging solution for the full Cheogram SIP stack, including an easy way to connect to an unfederated SIP server.

XMPP Server

First of all, in order to self-host a gateway speaking the XMPP protocol on one side, you’ll need an XMPP server. We suggest Prosody, which is already available from many operating systems. While a full Prosody self-hosting tutorial is out of scope here, the relevant configuration to add looks like this:

Component "asterisk"
    component_secret = "some random secret 1"
    modules_disabled = { "s2s" }
Component "sip"
    component_secret = "some random secret 2"
    modules_disabled = { "s2s" }

Note that, especially if you are going to set the gateway up with access to your private SIP account at some provider, you almost certainly do not want either of these federated. So no DNS setup is needed, nor do the component names need to be real hostnames. The rest of this guide will assume you’ve used the names here.

If you don’t use Prosody, configuration for most other XMPP servers should be similar.

Run Docker Image

You’ll need to pull the Docker image:

docker pull singpolyma/cheogram-sip:latest

Then run it like this:

docker run -d \
    --network=host \
    -e COMPONENT_DOMAIN=sip \
    -e COMPONENT_SECRET="some random secret 2" \
    -e ASTERISK_COMPONENT_DOMAIN=asterisk \
    -e ASTERISK_COMPONENT_SECRET="some random secret 1" \
    -e SIP_HOST=sip.yourprovider.example.com \
    -e SIP_USER=your_sip_username \
    -e SIP_PASSWORD=your_sip_password \
    -e SIP_JID=your-jabber-id@yourdomain.example.com \
    singpolyma/cheogram-sip:latest

If you just want to connect with the federated SIP network, you can leave off the SIP_HOST, SIP_USER, SIP_PASSWORD, and SIP_JID. If you are using a private SIP provider for connecting to the phone network, then fill in those values with the connection information for your provider, and also your own Jabber ID so it knows where to send calls that come in to that SIP address.

Make a Call

You can now make a call to any federated SIP address at them\40theirdomain.example.com@sip and to any phone number at +15551234567@sip, which will route via your configured SIP provider.

You should even be able to use the dialler in Cheogram Android:

Cheogram Android Dialler

Inbound calls will route to your Jabber ID automatically as well.

What About SMS?

Cheogram SIP does have some basic support for the SIP MESSAGE protocol, so if your provider offers that, it may work; however, more testing and polish are needed, since this is not a very common feature among the providers we have tested with.

Where to Learn More

If you have any questions or feedback of any kind, don’t hesitate to stop by the project channel, which you can join on the web or using your Jabber ID.

by Stephen Paul Weber at February 22, 2024 17:37

Erlang Solutions

What is Elixir?

What is Elixir: Exploring its Functional Programming Essence

In our latest post, we’ll be exploring Elixir, a robust programming language known for its concurrency and fault-tolerance capabilities. We’ll look at some of Elixir’s syntax and core features, as well as the Elixir community and some resources for beginners and enthusiasts alike.

The birth of Elixir

As the brainchild of José Valim, Elixir is rooted in Valim’s experiences with Ruby on Rails and Erlang. Elixir aimed to tackle the challenges of building scalable and fault-tolerant applications.

Harnessing the power of the Erlang Virtual Machine (VM), Elixir inherits its renowned traits of low latency, distributed computing, and fault tolerance. This foundation empowers developers to create robust systems capable of handling demanding workloads across diverse industries.

Erlang VM and Virtual Machine Process

Elixir’s versatility extends far beyond its roots. With its powerful tooling and ecosystem, Elixir facilitates productivity in various domains, including web development, embedded software, machine learning, data pipelines, and multimedia processing. Its flexibility and efficiency make it an ideal choice for tackling an array of challenges in today’s tech landscape.

Elixir and Erlang: A powerful duo

Elixir, a robust programming language, collaborates closely with Erlang, renowned for building fault-tolerant and distributed systems. Developed by Ericsson in the late 1980s, Erlang initially targeted telecommunications applications, prioritising reliability and uninterrupted service.

A key element driving the synergy between Elixir and Erlang is the BEAM, a virtual environment proficient in executing code written in both languages. Elixir, uniquely, is constructed directly atop the BEAM, inheriting its capacity for highly concurrent and fault-tolerant runtime operations. This integration fosters seamless interoperability between Elixir and Erlang applications, ensuring optimal performance and reliability.

Elixir on BEAM

Elixir benefits significantly from Erlang’s robust framework, leveraging its scalability, fault tolerance, and distributed processing capabilities. This makes Elixir a preferred choice in industries where system uptime is paramount. Additionally, Elixir developers gain access to Erlang’s established ecosystem and libraries, simplifying the development of resilient and scalable systems.

It also introduces contemporary syntax and language features, enhancing developer productivity and code expressiveness. This modernisation, combined with Erlang’s robust runtime, empowers developers to navigate the complexities of today’s software landscape confidently, delivering efficient and reliable solutions.

Elixir and Erlang are a formidable duo, complementing each other’s strengths to empower developers in crafting dependable, scalable, and fault-tolerant systems with ease and effectiveness.

Elixir’s syntax and language features

Elixir boasts a clean and expressive syntax inspired by Ruby, with a focus on developer productivity and readability. Its language features are designed to promote conciseness and clarity, making it an ideal choice for beginners and experienced developers alike.

  • Concurrency with Erlang processes: Elixir utilises lightweight Erlang processes for concurrency. These processes communicate via message passing, facilitating highly concurrent and fault-tolerant systems.
  • Immutable data: Elixir promotes immutability, ensuring that once data is created, it cannot be changed. This simplifies code reasoning and mitigates unexpected side effects.
  • Pattern matching: A core feature, pattern matching allows developers to destructure data and match it against predefined patterns, leading to concise and elegant code (see the short example after this list).
  • Functions as first-class entities: Functions can be assigned to variables, passed as arguments, and returned from other functions, enabling powerful abstractions and composition.
  • Metaprogramming with macros: Elixir offers metaprogramming capabilities through macros, empowering developers to generate and manipulate code at compile time. This facilitates the creation of domain-specific languages and powerful abstractions.
  • Fault Tolerance via Supervision Trees: Elixir adopts Erlang’s “let it crash” philosophy, isolating processes and containing failures. Supervision trees structure and manage process supervision, ensuring robust fault tolerance.
  • OTP for scalability and reliability: Elixir includes OTP, providing libraries and best practices for building scalable, fault-tolerant, and distributed systems. OTP features such as gen_servers and supervisors enhance system reliability.
  • Comprehensive tooling and documentation: Elixir offers a rich set of tools for development, testing, and deployment. Mix, the build tool, manages dependencies and runs tests, while ExDoc simplifies documentation creation and maintenance.
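
As a small illustration of the pattern matching and immutability points above (a standalone sketch, not taken from any particular project):

defmodule Example do
  # Destructure a map in the function head; only values with a matching shape are accepted.
  def greet(%{name: name, lang: "en"}), do: "Hello, #{name}!"
  def greet(%{name: name}), do: "Hi, #{name}!"
end

{:ok, user} = {:ok, %{name: "Ada", lang: "en"}}  # pattern matching on a tuple
IO.puts(Example.greet(user))                     # prints "Hello, Ada!"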

Understanding functional programming in Elixir

In Elixir, functional programming prioritises pure functions, immutable data, and higher-order functions. It encourages writing code clearly and expressively, treating functions as primary elements that can be passed as arguments or returned as results.

Elixir’s functional programming paradigm supports the development of robust, scalable, and fault-tolerant systems, which makes it an excellent option for creating distributed and concurrent applications.

Robustness:

  • Elixir’s functional approach reduces bugs and promotes code clarity.
  • Problems are solved in smaller, testable units.

Scalability:

  • Elixir’s lightweight processes enable easy concurrency.
  • Systems can scale across multiple cores or nodes effortlessly.

Fault Tolerance:

  • Elixir’s supervision tree ensures system resilience.
  • Failures are isolated and managed, keeping the system running.

Concurrency:

  • Elixir’s processes communicate asynchronously.
  • Concurrent operations are efficient and responsive.

Distribution:

  • Elixir applications can easily scale across multiple nodes.
  • Distributed computing is simplified, enabling high availability.

 Practical applications of Elixir

Elixir’s versatility and robust features make it a powerful language for developing a wide range of applications across different domains. From web development to distributed systems and embedded devices, Elixir’s concurrency, fault tolerance, and scalability enable developers to build resilient and efficient solutions. Here are some practical applications where Elixir shines:

Web development with Phoenix Framework

Phoenix, Elixir’s web framework, offers high-performance solutions for modern web apps. Leveraging Elixir’s concurrency and fault tolerance, Phoenix scales effortlessly to handle concurrent connections and real-time features.

Elixir Phoenix

Distributed systems and microservices

Elixir’s lightweight processes and distribution support make it ideal for distributed systems and microservices. Its fault-tolerant supervision trees ensure system reliability and scalability across multiple nodes.

Embedded systems and IoT

Elixir’s small footprint and low-latency performance suit embedded systems and IoT. With Nerves, developers can deploy Elixir to devices like Raspberry Pi, ensuring fault tolerance and resilience.

Real-time messaging and chat applications

Elixir’s concurrency and real-time support make it perfect for messaging apps. Libraries like Phoenix Channels enable scalable and fault-tolerant chat systems, handling numerous concurrent users seamlessly.

Financial and e-commerce systems

Elixir’s reliability and scalability are beneficial for financial and e-commerce platforms. Its fault-tolerant supervision ensures uninterrupted processing of transactions. Frameworks like Broadway facilitate scalable data processing for large transaction volumes.

What is Elixir used for: Real-world examples

Elixir, with its powerful features and versatile ecosystem, finds applications in a host of real-world scenarios across different industries:

Discord: Community-driven communication

Discord, serving over 150 million monthly active users, relies on Elixir for seamless voice and text chat experiences in gaming and educational communities.

Pinterest: Scalable backend services

Pinterest, with over 450 million monthly active users, employs Elixir for its backend services, handling millions of user interactions and content updates daily.

Deliveroo: Reliable food delivery services

Deliveroo, operating in over 800 cities globally, employs Elixir for its backend systems, ensuring reliable food delivery services for millions of customers worldwide.

Bleacher Report: Real-time sports updates

Bleacher Report delivers real-time sports updates and news to over 40 million monthly active users, leveraging Elixir for efficient data processing and content delivery.

PepsiCo: Supply chain optimisation

PepsiCo, one of the world’s largest food and beverage companies, uses Elixir to optimise its supply chain operations, ensuring efficient distribution of products across its global network.

Scalable web applications with Elixir

When you’re making mobile or web apps, scalability is key. If your app can’t handle more users, you might lose them. Plus, you could miss out on chances for growth. 

Scalability also matters financially. If your app can’t grow smoothly, you’ll end up spending more on infrastructure. That’s where Elixir comes in. It’s a powerful language for building apps that can handle lots of users. Elixir is used in different areas such as gaming and e-commerce. 

It’s like combining the best of OCaml and Haskell languages. With strong tools and a helpful community, Elixir is perfect for making apps that can grow with businesses.

Elixir’s ecosystem and community

Elixir’s success isn’t just attributed to its language features, but also to its vibrant ecosystem and supportive community. With a growing collection of libraries, tools, and resources, Elixir’s ecosystem continues to expand, making it easier for developers to build and maintain Elixir applications.

Tools and libraries enhancing Elixir development

Elixir boasts a rich ecosystem of libraries and tools that cover a wide range of functionalities, from web development and database integration to concurrency and distributed computing. The Phoenix web framework, for instance, provides a robust foundation for building scalable and real-time web applications, while Ecto offers a powerful database abstraction layer for interacting with databases in Elixir applications. Other notable libraries include Broadway for building concurrent and fault-tolerant data processing pipelines, and Nerves for developing embedded systems and IoT applications.

The growing community and learning resources

One of Elixir’s greatest strengths is its supportive and inclusive community. From online forums and chat rooms to local meetups and conferences, Elixir enthusiasts have numerous avenues to connect, learn, and collaborate with fellow developers. 
The Elixir Forum and the Elixir Slack community are popular online hubs where developers can seek help, share knowledge, and discuss best practices.

Additionally, ElixirConf, the annual conference dedicated to Elixir and Erlang, provides a platform for developers to network, attend talks and workshops, and stay up-to-date with the latest developments in the Elixir ecosystem.

Conclusion

Elixir’s unique blend of concurrency, fault tolerance, and scalability makes it a powerful language for modern application development. Its clean syntax and functional programming principles enhance developer productivity and code maintainability.

Looking ahead, Elixir’s future in software development is promising. With the growing demand for distributed systems and data-intensive applications, Elixir’s strengths position it well for continued growth and innovation.

Supported by an active community and ongoing contributions, Elixir is set to play a significant role in shaping the future of software development. Whether you’re a seasoned developer or new to Elixir, its potential is boundless in the dynamic landscape of software engineering.

Further reading and resources

Books on Elixir

  • “Programming Elixir” by Dave Thomas: This book offers a comprehensive introduction to Elixir, covering its syntax, features, and best practices for building robust applications.
  • “Elixir in Action” by Saša Jurić: A practical guide to Elixir programming, covering topics such as concurrency, distributed computing, and building scalable applications.
  • “The Little Elixir & OTP Guidebook” by Benjamin Tan Wei Hao: This book provides a gentle introduction to Elixir’s concurrency model and the OTP framework, essential for building fault-tolerant and distributed systems.

Online Tutorials on Elixir

  • Elixir School: A free online resource offering interactive lessons and tutorials on Elixir programming, suitable for beginners and experienced developers alike.

The Complete Elixir and Phoenix Bootcamp

Master Functional Programming techniques with Elixir and Phoenix while learning to build compelling web applications!

Community forums and support for Elixir learners

  • Elixir Forum: A vibrant online community for Elixir enthusiasts to ask questions, share knowledge, and discuss topics related to Elixir programming and ecosystem.
  • Elixir Slack Community (https://elixir-slackin.herokuapp.com/): Join the Elixir Slack community to connect with fellow developers, ask for help, and engage in discussions on all things Elixir.
  • Reddit r/elixir: The Elixir subreddit is a place to share news, articles, and questions about the Elixir programming language and ecosystem.

These resources provide a solid foundation for learning the Elixir programming language and engaging with the vibrant Elixir community. Whether you’re just starting your journey with Elixir or looking to deepen your knowledge, these books, tutorials, and forums offer valuable insights and support along the way.

The post What is Elixir? appeared first on Erlang Solutions.

by Content Team at February 22, 2024 08:58

February 16, 2024

JMP

Newsletter: JMP is 7 years old — thanks to our awesome community!

Hi everyone!

Welcome to the latest edition of your pseudo-monthly JMP update!

In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client. Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

Today JMP is 7 years old! We launched on this day in 2017 and a lot has changed since then. In addition to what we talked about in past years (see https://blog.jmp.chat/b/february-newsletter-2022 and https://blog.jmp.chat/b/february-newsletter-2023 for example), in the last year we’ve brought JMP out of beta, launched a data plan, and have continued to grow our huge community of people (channel participants, JMP customers, and many more) excited about communication freedom. So, in light of some vibes from yesterday’s “celebration” in some countries, we’d like to take this opportunity to say: Thank you to everyone involved in JMP, however that may be! You are part of something big and getting bigger! Communication freedom knows no bounds, technically, socially, or geographically. And you make that happen!

Along with this huge community growing, we’ve been growing JMP’s staff as well — we’re now up to 5 employees working hard to build and maintain the foundations of communication freedom every day. We look forward to continuing this growth, in a strong and sustainable way, for years to come.

Lastly, while dates have not been announced yet, we’re excited to say we’ll be back at FOSSY in Portland, Oregon, this year! FOSSY is expected to happen in July and, if last year is any indication, it will be a blast. We’d love to see some of you there!

Thanks again to everyone for helping us get to where we are today. We’re super grateful for all your support!

As always, we’re very open to feedback and would love to hear from you if you have any comments, questions, or otherwise. Feel free to reply (if you got this by email), comment, or find us on any of the following:

Thanks for reading and have a wonderful rest of your week!

by Denver Gingerich at February 16, 2024 02:51

February 14, 2024

Erlang Solutions

Why Elixir is the Programming Language You Should Learn in 2024

In this article, we’ll explain why learning Elixir is an ideal way to advance your growth as a developer in 2024. What factors should you consider when deciding to learn a new programming language? 

Well, it typically depends on your project or career goals. Ideally, you’d want a language that:

  • Is enjoyable and straightforward to use
  • Can meet the needs of modern users
  • Can offer promising career prospects
  • Has an active and supportive community
  • Provides a range of useful tools
  • Supports full-stack development through frameworks
  • Offers easily accessible documentation
  • Helps you grow as a programmer

This article will explore how Elixir stacks up against these criteria. 

Elixir is fun and easy to use

Elixir is fun and very user-friendly, which is an important long-term consideration. Its syntax bears a striking resemblance to Ruby. It’s clean and intuitive, making coding simple.

When it comes to concepts like pattern matching and immutable data, they become your trusted allies and simplify your work. You’re also surrounded by a supportive and vibrant community, so you’re never alone in your journey. Whether you’re building web apps, handling real-time tasks, or just experimenting, Elixir makes programming enjoyable and straightforward, without any unnecessary complexity.

How Elixir can meet modern usage demands

Elixir’s strength in handling massive spikes in user traffic is unparalleled, thanks to its foundation on the BEAM VM, designed explicitly for concurrency.

BEAM Scheduler

While digital transformation brings about increased pressure on systems to accommodate billions of concurrent users, Elixir stands out as a reliable solution. For those curious about concurrency and its workings, our blog compares the JVM and BEAM VM, offering insightful explanations. 

Major players like Pinterest and Bleacher Report have recognised the scalability benefits of Elixir, with Bleacher Report, for instance, reducing its server count from 150 to just 5. 

This not only streamlines infrastructure but also enhances performance, allowing them to manage higher traffic volumes with faster response times. The appeal of a language that delivers scalability and fault tolerance is great for navigating the demands of today’s digital landscape.

Elixir’s rewarding career progression

Embarking on a career in Elixir programming promises an exciting journey filled with learning and progress. As the demand for its developers rises, opportunities for growth blossom across various industries. Mastering Elixir’s unique mix of functional programming and concurrency equips developers with sought-after skills applicable to a wide range of projects, from building websites to crafting complex systems. More and more companies, big and small, are embracing the language. As developers dive deeper into Elixir and gain hands-on experience, they pave the way for a rewarding career path filled with growth and success.

When Elixir first emerged, its community was small, as expected with any new technology. But now, it’s thriving! Exciting events like ElixirConf in Europe and the US, EMPEX, Code Elixir LDN, Gig City Elixir, and Meetups worldwide contribute to this vibrant community. 

This growth means the language is always evolving with new tools, and there’s always someone ready to offer inspiration or a helping hand when tackling a problem.

Elixir’s range of useful tooling

Tooling makes languages more versatile and tasks easier, saving you from reinventing the wheel each time you tackle a new problem. Elixir comes equipped with a range of robust tools:

  • Phoenix LiveView: Enables developers to build real-time, front-end web applications without JavaScript.
  • Crawly: Simplifies web crawling and data scraping tasks in Elixir.
  • Ecto: A database wrapper and query generator for Elixir, designed for building composable queries and interacting with databases.
  • ExUnit: Elixir’s built-in testing framework provides a clean syntax for writing tests and running them in parallel for efficient testing (see the short sketch after this list).
  • Mix: Elixir’s build tool, which automates tasks such as compiling code, managing dependencies, and running tests.
  • Dialyzer: A static analysis tool for identifying type discrepancies and errors in Erlang and Elixir code, helping to catch bugs early in the development process.
  • ExDoc: A documentation generator for Elixir projects, which generates HTML documentation from code comments and annotations, making it easy to create and maintain project documentation.
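
To give a flavour of ExUnit and Mix together, here is a minimal test module as it might appear in a hypothetical Mix project (file test/math_test.exs, with ExUnit.start() already called from test/test_helper.exs; run it with mix test):

defmodule MathTest do
  use ExUnit.Case, async: true  # let this module run in parallel with other test modules

  test "addition is commutative" do
    assert 1 + 2 == 2 + 1
  end
end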

Elixir frameworks allow for full-stack development

Given its scalability, its performance, and its origins in Erlang, it is no surprise that Elixir is a popular backend choice. As mentioned above, Phoenix LiveView has provided an easy, time-efficient and elegant way for Elixir developers to produce front-end applications.
Also, the Nerves framework allows for embedded software development on the hardware end. As a result, this is a language that can be adopted throughout the tech stack. This doesn’t just make it an attractive choice for businesses; it also opens up the door for where the language can take you as a developer.

Elixir’s easily accessible documentation

In a community that values good documentation, sharing what you know is easy. Elixir is all about that: the project takes its docs seriously, which makes learning the language easy. And it isn’t just about learning; everyone can jump in and help make those docs even better. It’s like a big conversation where everyone’s invited to share and improve together.

Learning Elixir can make you a better programmer in other languages

Many developers transitioning from object-oriented languages have shared how learning Elixir enhanced their programming skills in their main languages. Diving into a functional programming style like Elixir’s makes you rethink how you code. It shines a light on your programming habits and opens your mind to fresh ways of solving problems. This newfound perspective sticks with you, no matter what language you’re coding in next. And if you’re a Ruby fan, Elixir’s syntax feels like home, making the switch to functional, concurrent programming remarkably smooth.

While everyone has their reasons for picking a programming language, these are some pretty solid reasons to give Elixir a try in 2024 and beyond.

Ready to get started in Elixir?

Getting started is simple.

Begin by visiting the official “Getting Started” page. Additionally, you’ll find a host of free downloadable packages from our team at Erlang Solutions, available for Elixir.

To immerse yourself in the community, ElixirForum is an excellent starting point. You can also explore discussions using the #Elixirlang and #MyElixirStatus hashtags on Twitter.

Curious to learn more about what we do with the Elixir language? Keep exploring!

The post Why Elixir is the Programming Language You Should Learn in 2024 appeared first on Erlang Solutions.

by Content Team at February 14, 2024 15:25

February 08, 2024

Erlang Solutions

A Match Made in Heaven – Transactional Systems and Erlang/Elixir

Transactional systems implemented with Domain-Driven Design are not inherently complex; they do, however, face a very critical challenge: managing an influx of real-time data while maintaining system reliability and responsiveness. The core issue lies in organising and updating vast amounts of data in real time while handling intermittent but significant spikes in user traffic. The stakes are high; any delay in updating information could prove disastrous, costing both credibility and revenue.

In response to these demands, Elixir stands out as a strategic solution due to its exceptional capabilities in managing high-volume, real-time data processing, courtesy of the Erlang virtual machine, the BEAM. Stale data is not just costly but also exploitable by end-users, which underscores the importance of a system that can scale effortlessly and handle sudden, intense loads without compromising performance or reliability; these are qualities entrenched in Elixir since day one.

Real-time responsiveness: Maintaining the pulse of live events

Elixir, rooted in the robust framework of Erlang, was originally designed for developing telephony applications which demand swift responses within milliseconds, consistently and reliably. This characteristic perfectly aligns with the demands of transactional systems where instantaneous updates during pivotal moments are crucial. Elixir’s natural ability to manage these constant updates while maintaining minimal latency becomes a critical asset in such an environment. The Grand National, an event notorious for overwhelming bookmakers every year, is a perfect example where Elixir’s real-time responsiveness can shine due to the avalanche of transactions occurring simultaneously in a very small window.


Handling such monumental traffic volumes presents a significant challenge in sports betting systems. High-profile events produce an overwhelming number of transactions, which necessitates a system capable of managing these surges without being brought to its knees. Enter Elixir and Erlang, both well known for their proficiency in handling surges without flinching. Discord, the instant messaging giant, exemplifies Elixir’s ability to handle escalating demands: it scaled to accommodate 5 million concurrent users with millions of events per second, vividly showcasing Elixir’s prowess.

Concurrency and fault tolerance: Preventing disruptions in user experience

Elixir and Erlang were built on a foundation designed for concurrency, simplifying the creation of concurrent systems with a fundamental emphasis on isolation and fault tolerance. Their architecture revolves around processes that operate independently and communicate solely via message passing, without sharing state. This ensures that processes do not interfere with one another, and it enables individual processes to be monitored and revived in the event of failure.

In the context of a transactional system, having a single process manage each user interaction means that any issue with one process remains contained and does not affect the rest, so the system keeps running smoothly. This approach prevents the unfortunate situation where a solitary user’s problem could otherwise impact the entire platform, thereby preserving user trust and system integrity amid surges in usage.

The phrase “let it crash” is common among Elixir and Erlang developers. It does not disregard the significance of crashes or errors; rather, it reflects the resilience of the system. Crashes and errors are confined to their respective processes, averting a domino effect that could bring down the entire system. Recovery from these isolated incidents is often as straightforward as restarting the affected processes, and there are several strategies for managing how process failures are handled, giving developers flexibility in that realm.
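
As a rough, hypothetical sketch of that model (the module and user names below are invented for illustration, not taken from any system mentioned here), each user interaction gets its own GenServer, supervised with a one_for_one strategy so that a crash in one session restarts only that session:

    # Hypothetical example: one lightweight process per user session.
    defmodule SessionWorker do
      use GenServer

      def start_link(user_id) do
        GenServer.start_link(__MODULE__, user_id, name: {:global, {:session, user_id}})
      end

      @impl true
      def init(user_id), do: {:ok, %{user: user_id, bets: []}}

      @impl true
      def handle_call({:place_bet, bet}, _from, state) do
        # If this callback raises, only this user's process crashes;
        # the supervisor restarts it while every other session keeps running.
        {:reply, :ok, %{state | bets: [bet | state.bets]}}
      end
    end

    # One child per user, restarted independently on failure ("let it crash").
    children = [
      Supervisor.child_spec({SessionWorker, "alice"}, id: :alice_session),
      Supervisor.child_spec({SessionWorker, "bob"}, id: :bob_session)
    ]

    {:ok, _sup} = Supervisor.start_link(children, strategy: :one_for_one)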

Scalability: Seamlessly adapting to growing demands

The concurrency model implemented by Erlang and Elixir allows for seamless scaling within a single node to accommodate escalating demands without compromising service quality. As resource requirements increase, the system simply spawns more lightweight processes, ensuring a consistent and reliable service even under rapidly increasing user load. This scalability has been validated by the industry giant Bet365, which increased the number of users supported on a single node from tens of thousands to hundreds of thousands.
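
To make the “just spawn more processes” point concrete, here is a small illustration (a sketch, not a benchmark): spawning 100,000 BEAM processes on a single node and waiting for each of them to report back, which typically finishes in well under a second on commodity hardware.

    # Spawn 100_000 lightweight processes and collect one message from each.
    parent = self()

    for i <- 1..100_000 do
      spawn(fn -> send(parent, {:done, i}) end)
    end

    # Receive every reply; each BEAM process starts with only a few kilobytes of memory.
    for _ <- 1..100_000 do
      receive do
        {:done, _i} -> :ok
      end
    end

    IO.puts("all 100_000 processes replied")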

Bleacher Report, the second largest sports website in the world, is another success story: it was able to handle 8 times its normal traffic without autoscaling, all while using 8 servers compared with the 150 it was using before adopting Elixir.

While concurrency isn’t unique to Elixir and Erlang, their strength lies in leveraging the power of the BEAM virtual machine: a time-tested, battle-hardened system designed explicitly for concurrent, fault-tolerant, and real-time applications. The synergy between Elixir/Erlang and the BEAM delivers a level of reliability and resilience that is hard to match.

Conclusion

In conclusion, companies that require highly transactional systems call for a platform that is capable of managing extreme data loads, ensuring constant responsiveness, and remaining resilient in the face of unpredictability. Erlang and Elixir, through their unique strengths and support from the BEAM, stand out as the ideal solution, not just for sports betting but for any industry facing similar demands for reliability, scalability, and real-time processing. 

The post A Match Made in Heaven – Transactional Systems and Erlang/Elixir appeared first on Erlang Solutions.

by Lee Sigauke at February 08, 2024 06:59

February 02, 2024

Mathieu Pasquet

slixmpp v1.8.5

Highlights

  • Moving away from self-hosted gitlab (mathieui)
  • Fix connection to Snikket instances (pep., mathieui)
  • Performance fix for XEP-0115 queries
  • New documentation listing projects using slixmpp (genghis)
  • Bugfixes and improvements (mostly from nicoco)

Details

  • Gitlab migration: see the other blogpost
  • Fix connections to Snikket instances:

Snikket decided to forbid PLAIN authentication, which is a good thing, but it exposed a bug in slixmpp: it was attempting SCRAM-SHA-1-PLUS authentication on TLSv1.3 using the tls-unique channel binding, which the spec forbids on this version of TLS because of various known attacks. TLSv1.3 has the tls-exporter binding, which replaces tls-unique, but we cannot currently use it in slixmpp because CPython does not support it. For now, connections to Snikket instances will use SCRAM-SHA-1 without channel binding (note that the stanzas may still say SCRAM-SHA-1-PLUS, but it is the SCRAM payload that matters here).

  • Performance fix for XEP-0115 (Entity Capabilities):

Previously, when receiving the same hash many times while it was not yet in the cache, slixmpp would fire many similar requests at the same time, all of which would predictably yield the same result. Nicoco made a fix, tested it in Slidge and upstreamed it, which greatly improves the situation.

  • Documentation:

New contributor genghis has taken on the task of adding a page listing various projects and bots that use slixmpp, both for their own visibility and to give more examples of slixmpp in use.

  • Bugfixes:

nicoco has made various improvements to XEP plugins used in Slidge, such as the XEP-0356, XEP-0428, XEP-0461, and XEP-0313 plugins. sxavier added helpful documentation and an example to the XEP-0221 plugin, and Daniel Roschka fixed an issue where repeatedly calling connect() would wipe the previously set connection parameters.

Thanks to all new and returning contributors and maintainers for this release. It can be found on codeberg.

by mathieui at February 02, 2024 01:00

The XMPP Standards Foundation

The XMPP Newsletter December 2023 & January 2024

Welcome to the XMPP Newsletter, great to have you here again! This issue covers the months of December 2023 and January 2024. After a winter break we are back, and we still wish you a wonderful and happy new year 2024! Many thanks to all our readers and all contributors!

Like this newsletter, many projects and their efforts in the XMPP community are a result of people’s voluntary work. If you are happy with the services and software you may be using, please consider saying thanks or help these projects! Interested in supporting the Newsletter team? Read more at the bottom.

XSF Announcements

Happy Birthday, Jabber!

On 4th January 2024, Jeremie Miller’s announcement of Jabber turned 25, and with it began what would become the initiation, development and propagation of XMPP as we know it today!

Join the endeavor for the next 25 years!

Happy Birthday!

XSF Membership

If you are interested in joining the XMPP Standards Foundation, please apply now.

XMPP Summit 26 & FOSDEM 2024

The XSF is holding the 26th XMPP Summit, which is to take place on February 1st & 2nd 2024 in Brussels (Belgium, Europe). Following the Summit, the XSF will also be present at FOSDEM 2024, which takes place on February 3rd & 4th 2024. Find all the details in our Wiki. Please sign up now if you are planning to attend, since this helps with organisation. The event is of course open to everyone interested in participating. Spread the word within your circles!

XMPP and Google Summer of Code 2024

The XSF has applied again as a hosting organisation for GSoC 2024. If you are interested, please reach out!

XSF and Google Summer of Code 2024

XSF Fiscal Hosting Projects

The XSF offers fiscal hosting for XMPP projects. Please apply via Open Collective. For more information, see the announcement blog post. Current projects you can support:

XMPP Events

Talks

  • XMPP Italian Happy Hour Podcast [IT]: Dive into the world of XMPP with the Italian Happy Hour podcast, a monthly event derived from recorded video sessions. Each episode is dedicated to the XMPP protocol, offering insights and discussions from enthusiasts and professionals within the community. Whether you’re commuting, working out, or simply seeking to listen to interesting conversation, this podcast delivers the essence of Italian XMPP gatherings directly to your ears. Tune in at XMPP Italian Happy Hour Podcast or subscribe to the RSS feed to never miss an episode. Fediverse: @xmpphappyhour@open.audio.
  • RFC 9420 or how to scale end-to-end encryption with Messaging Layer Security (MLS)

Articles

Software News

Clients and Applications

Servers

Libraries & Tools

Extensions and specifications

The XMPP Standards Foundation develops extensions to XMPP in its XEP series in addition to XMPP RFCs.

Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).

Proposed

The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

  • PubSub Server Information
    • This document defines a data format whereby basic information of an XMPP domain can be expressed and exposed over pub-sub.
  • Host Meta 2 - One Method To Rule Them All
    • This document defines an XMPP Extension Protocol for extending XEP-0156 by modifying the JSON Web Host Metadata Link format to support discovering all possible XMPP connection methods, for both c2s and s2s.

New

  • Version 0.1.0 of XEP-0484 (Fast Authentication Streamlining Tokens)
    • This specification defines a token-based method to streamline authentication in XMPP, allowing fully authenticated stream establishment within a single round-trip. Promoted to Experimental. (XEP Editor: kis)
  • Version 0.1.0 of XEP-0483 (HTTP Online Meetings)
    • This specification defines a protocol extension to request URLs from an external HTTP entity usable to initiate and invite participants to an online meeting. Promoted to Experimental. (XEP Editor: kis)

Deferred

If an experimental XEP is not updated for more than twelve months, it will be moved off Experimental to Deferred. If there is another update, it will put the XEP back onto Experimental.

  • No XEPs deferred this month.

Updated

Last Call

Last calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call can help improve the XEP before returning it to the Council for advancement to Stable.

  • No last call this month.

Stable

  • No XEP moved to stable this month.

Deprecated

  • No XEP deprecated this month.

Spread the news

Please share the news on other networks:

Subscribe to the monthly XMPP newsletter
Subscribe

Also check out our RSS Feed!

Looking for job offers or want to hire a professional consultant for your XMPP project? Visit our XMPP job board.

Newsletter Contributors & Translations

This is a community effort, and we would like to thank translators for their contributions. Volunteers are welcome! Translations of the XMPP Newsletter will be released here (with some delay):

  • English (original): xmpp.org
    • General contributors: Adrien Bourmault (neox), Alexander “PapaTutuWawa”, Arne, cal0pteryx, emus, Federico, Jonas Stein, Kris “poVoq”, Licaon_Kter, Ludovic Bocquet, Mario Sabatino, melvo, MSavoritias (fae,ve), nicola, Simone Canaletti, XSF iTeam
  • French: jabberfr.org and linuxfr.org
    • Translators: Adrien Bourmault (neox), alkino, anubis, Arkem, Benoît Sibaud, mathieui, nyco, Pierre Jarillon, Ppjet6, Ysabeau
  • Italian: notes.nicfab.eu
    • Translators: nicola

Help us to build the newsletter

This XMPP Newsletter is produced collaboratively by the XMPP community. Each month’s newsletter issue is drafted in this simple pad. At the end of each month, the pad’s content is merged into the XSF Github repository. We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. You have a project and want to spread the news? Please consider sharing your news or events here, and promote it to a large audience.

Tasks we do on a regular basis:

  • gathering news in the XMPP universe
  • short summaries of news and events
  • summary of the monthly communication on extensions (XEPs)
  • review of the newsletter draft
  • preparation of media images
  • translations
  • communication via media accounts

License

This newsletter is published under CC BY-SA license.

February 02, 2024 00:00

February 01, 2024

Erlang Solutions

What Is the Fastest Programming Language? Making the Case for Elixir

In the realm of technology, speed isn’t merely a single factor; it’s a constant way of life. Developers frequently find themselves needing to rethink solutions overnight, underscoring the importance of being able to swiftly modify code. This agility has become indispensable in modern development, especially when evaluating the fastest programming language.

Because of this, finding the right language is a recurring challenge for both developers and business owners. Whatever your use case, Elixir consulting is one proven way to harness one of the fastest programming languages available today.

But defining what “the fastest programming language” means in the context of development can be just as complicated. To better understand adaptability and speed in coding languages, we’ve outlined how this should be determined, alongside some of the leading trends that continue to disrupt the concept of fast programming at present. 

What determines a programming language’s speed?

Several factors go into determining which programming language is the fastest. It’s first important to note that the quality of your code, and the skill of the programmer behind it, matter more than the specific language you’re using. This is why it’s crucial to work with talented, experienced developers well-versed in their respective languages.

However, there are factors that affect how efficiently code can be executed. One example is multi-threading, or concurrency. Concurrency means being able to perform multiple complicated tasks at once; languages with this capability are therefore often more versatile, and faster, as a result.
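
As a small illustration of the idea, here is a minimal Elixir sketch; the two anonymous functions are placeholders that merely sleep to simulate slow work.

    # Two independent "complicated tasks" run at the same time.
    fetch_price = fn -> Process.sleep(300); 42.0 end
    fetch_stock = fn -> Process.sleep(300); 17 end

    price_task = Task.async(fetch_price)
    stock_task = Task.async(fetch_stock)

    # Both tasks run concurrently, so total wall time is about 300 ms, not 600 ms.
    price = Task.await(price_task)
    stock = Task.await(stock_task)

    IO.inspect({price, stock})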

Another core way in which languages differ in terms of speed is whether they’re compiled or interpreted languages.

Compiled vs interpreted languages

All programming languages are written in human-readable code and then translated into machine-readable code so they can be executed. The way this information transfer occurs can however have a big impact on both flexibility and speed.

Interpreted languages are read by an interpreter, which translates the code as the program runs. Compiled languages, by contrast, are translated into machine code ahead of time, so the machine can execute them directly without an interpreter.

A simplified way of thinking about this is to see interpreted languages as a conversation between two people who speak different languages, with an interpreter translating between them. Meanwhile, compiled languages are more like a conversation between two people who speak the same language.

In practice, this means compiled languages can usually be executed faster than interpreted languages, because the translation step happens ahead of time rather than at runtime.

Compiled vs interpreted languages

It also means programmers can be more flexible when using compiled languages, as they have more control over areas like CPU usage.

Is Elixir one of the fastest programming language options?

Elixir is a compiled language, which means it has several efficiency benefits when compared with interpreted languages like Python and JavaScript, among others.

Elixir was also designed with concurrency in mind from the outset. This means programmers can easily make use of multi-threading, allowing them to build complex solutions more effectively. Elixir’s benefits also extend to fault tolerance; while this doesn’t directly improve speed, the ability to keep systems functional makes solutions more reliable and allows developers to solve problems in a targeted way.
Combining these features with Elixir’s scalability makes it one of the fastest programming language options available to developers today.
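
As a small, hedged sketch of that concurrency in practice (the work function below is a stand-in for real logic), Task.async_stream processes a batch of items concurrently while capping how many run at once:

    # Square 50 numbers with up to 10 items in flight at a time.
    work = fn n ->
      Process.sleep(100)  # simulate a slow computation or remote call
      n * n
    end

    results =
      1..50
      |> Task.async_stream(work, max_concurrency: 10, timeout: 5_000)
      |> Enum.map(fn {:ok, value} -> value end)

    IO.inspect(Enum.sum(results))

With ten items in flight at a time, the 50 simulated jobs finish in roughly half a second instead of five.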

Top contenders for the fastest programming language

In the present day, a plethora of programming languages are available for use, with developers continually innovating and introducing new ones. The effectiveness of a programming language often hinges on its design, usability, efficiency, and applicability.

It’s essential to grasp the factors influencing the performance of a programming language. Parameters such as execution speed, memory utilisation, and adeptness in managing intricate tasks are pivotal considerations for developers assessing language proficiency.

That said, let’s delve into the contenders.

Python: Versatility and speed

Python is a widely used programming language that is great for building highly scalable websites for users: 

Readability and simplicity: Python boasts a syntax engineered for readability and ease of comprehension, prioritising code clarity and maintainability. Its straightforward and intuitive structure allows developers to articulate concepts concisely.

Abundant libraries and frameworks: Python boasts a rich ecosystem of libraries and frameworks that streamline various web development tasks. 

Thriving community: Backed by a thriving and expansive community of developers, Python experiences continual growth and support. 

Scalability and performance: Python garners acclaim for its scalability and performance, allowing it to manage high-volume web applications. 

Integration and compatibility: Python seamlessly integrates with various technologies, affording flexibility in web development endeavours. 

Swift: The speed of Apple’s innovation

Swift in mobile app development

Central to iOS app development is Swift, Apple’s robust and user-friendly programming language, designed with simplification in mind. Swift’s succinct and expressive syntax empowers developers to craft code that is both cleaner and easier to maintain.

The main drivers behind its increasing popularity are:

Benefits of the Swift language for iOS development

Enhanced syntax and readability: Swift boasts a concise syntax, making it easy to understand and work with. 

Reduced maintenance: Swift streamlines the coding process, leading to highly efficient code that is easier to maintain.

Minimised error probability: With Swift, the likelihood of coding or compilation errors is significantly decreased, as the language emphasises safety and security.

Interactive playground: The Swift Playground feature enables developers to experiment with coding algorithms without having to complete the entire app, enhancing creativity and coding speed.

High performance: Swift excels in speed compared to many other programming languages, resulting in lower development costs.

Open source: Swift is freely available and allows for extensive customisation based on individual needs.

Ruby: Quick development and easy syntax

Ruby on Rails for web applications

Ruby on Rails (or Rails) is known for its capacity to streamline web development. Rails emphasises efficiency, enabling developers to achieve more with less code compared to many other frameworks.

Building apps quickly and easily: Rails focuses on quick prototyping and iterative development. This approach minimises bugs, enhances adaptability, and makes the Rails application code more intuitive.

Open-source libraries: Ruby on Rails has plenty of ready-made libraries available. These libraries enable you to enhance your web application without starting from scratch. The supportive Rails community often improves these tools, making them more accessible and valuable, with ample community support on platforms like GitHub.

Simple Model View Controller (MVC): Long-time fans of Ruby on Rails swear by the MVC architecture. Thanks to MVC, it’s incredibly time-efficient for Rails developers to create and maintain web applications.

Reliable testing environment: Rails applications come with three default environments: production, development, and test, defined in a simple configuration file. Keeping separate test data ensures that testing won’t interfere with the actual development or production database.

Flexible code modification and migration: Ruby on Rails has flexibility in modifying and migrating code. Migration allows you to define changes in your database structure, making it possible to use a version control system to keep things in sync. This flexibility is great for scalability and cost-effectiveness because you don’t have to overhaul your source code when migrating to another platform.

Kotlin: A modern approach to speed

Kotlin in Android development

Kotlin is a versatile programming language that works across various platforms. It is well suited to Android app development, especially since it is an officially supported language for writing Android apps.

Kotlin: The official programming language for Android

Streamlined Android app development: Kotlin presents a more efficient approach to creating Android apps, with a compact library that keeps method counts low.

Simplified code and enhanced readability: Kotlin shortens code and improves readability, reducing errors and expediting coding processes.

Open-source advantage: Being open-source ensures consistent support from the Kotlin Slack team, fostering high-quality development.

Ease of learning: Kotlin proves to be a user-friendly language for beginners, with easily understandable code that empowers developers to solve problems creatively and effectively.

Increased productivity and accelerated development: Adopting Kotlin leads to heightened productivity and faster development. Safety features like null safety reduce bug occurrences, resulting in quicker debugging and maintenance.

Java: A balanced blend of speed and functionality

Java in enterprise solutions

Java’s “write once, run anywhere” capability makes it a top choice for enterprise software development, offering extensive support across diverse platforms and operating systems. 

This feature enables developers to write code once and execute it across various environments, resulting in significant time and cost savings while minimizing maintenance requirements. In the realm of IT, Java’s cross-platform compatibility ensures seamless operation across platforms like Windows, Mac OS, and Linux, making it particularly well-suited for enterprise needs.

Security: Paramount in enterprise applications; Java’s architecture offers robust security features to protect both data and applications, ensuring the integrity of business operations.

Multithreading: Java’s multithreaded environment enhances performance by enabling faster response times, smoother operations, and efficient management of multiple requests simultaneously. This not only boosts productivity but also reduces development challenges for enterprise applications handling numerous threads.

Ease of use: The simplicity and flexibility of Java coding, coupled with its user-friendly interface, streamline the development process. Additionally, Java’s reusable code promotes efficiency, allowing enterprises to leverage existing codebases for developing new software applications while ensuring ease of maintenance.

Stability: Renowned for its stability, Java stands as one of the most reliable programming languages, capable of managing errors without compromising the entire application. This stability fosters trust among companies seeking a dependable language to deliver a seamless customer experience.

Availability of libraries: Java’s vast library support empowers developers with a plethora of resources to address various challenges and fulfil specific functionalities, further enhancing its appeal for enterprise development projects.

Comparing speeds: The fastest programming languages

From powering high-performance applications to ensuring swift response times in web services, the programming language used can significantly impact the efficiency and effectiveness of a project. In this exploration of programming languages, let’s uncover the strengths and capabilities of each language in delivering optimal performance across diverse domains.

C++: The powerhouse of performance

C++ in game and system development

In gaming, where milliseconds matter, C++ allows developers to fine-tune performance for smooth gameplay and stunning graphics. Similarly, in system programming tasks like operating system development, C++’s speed and efficiency ensure responsiveness and reliability.

C#: Versatility in the .NET framework

C# in desktop and web services

C# shines in desktop and web service development, offering a balance of speed and versatility within the .NET framework. 

While not as low-level as C++, it excels in building responsive desktop applications and powerful web services. With features like just-in-time compilation and memory management, C# enables developers to create applications that perform well and scale seamlessly, whether on the desktop or in the cloud.

Lesser-known speed demons

Exploring languages like Assembly, Lisp, and Go

Beyond the mainstream languages, there are lesser-known options that excel in terms of speed. Assembly, known for its direct hardware manipulation, is a go-to choice for projects requiring maximum performance, such as embedded systems and real-time applications. Lisp, with its powerful macro system, allows developers to optimise code for specific tasks, resulting in highly efficient programs. Go, a relatively newer language, offers simplicity and built-in concurrency features, making it ideal for tasks demanding speed and scalability.

JavaScript and PHP: Dominating the web

Scripting languages in web development

JavaScript and PHP have become foundational in web development, powering a vast majority of websites and web applications. Despite their scripting nature, they have evolved to deliver impressive speed and performance, driving innovation on the web. JavaScript’s advancements in browser technology, including just-in-time compilation, have elevated its performance to near-native levels, enabling the creation of complex client-side applications. Similarly, PHP has evolved into a robust platform for server-side web development, with features like opcode caching and asynchronous processing enhancing its speed and scalability. Together, JavaScript and PHP form the backbone of the web, enabling dynamic and interactive experiences for users worldwide.

The future of fast programming

As with all facets of technology, the nature of fast programming is evolving every day. Several trends and innovations are set to transform the concept of efficiency in programming in the coming months.

Emerging trends in programming speed

Compiled languages remain more efficient than interpreted languages in general, but this gap is steadily closing. This is thanks to what’s known as “just-in-time compilation”, also known as dynamic compilation, which is a method designed to improve efficiency in interpreted languages.

Open source development is another important trend in how the fastest-programming-language debate will evolve. Open source means code is made freely available to everyone, so developers can learn collaboratively. It plays a key role in improving programming speed across the industry, because all developers have access to new methods that can be studied and standardised. Languages with larger open source communities may therefore become more efficient over time.

Both low-code and no-code programming have also become more prominent in recent years. These approaches are no substitute for fully coded applications created by experienced developers, but they do evidence the continued focus on speed and efficiency gains in software development today.

Innovations and future predictions

At the moment, AI’s role in programming is mostly speculative. But as the technology evolves, both AI and machine learning may further disrupt the efficiency potential of programming languages. 

One common prediction is for AI to be able to automate some of the more repetitive coding tasks, by analysing coding patterns and then generating short lines of code. In theory, this will reduce the time programmers spend on repetitive tasks, allowing them to experiment and focus on more detailed parts of programming. AI simply isn’t reliable enough to provide this level of support across the profession yet, but that may change in the coming years.

Speed in programming isn’t simply about developing initial builds quickly; it also concerns the ability to scale at speed. The scalability potential of programming languages will therefore continue to play a pivotal role in their selection for advanced systems in the future.

Finally, coding practices designed to streamline and automate the process of programming, like implementing CLIs (command-line interfaces), will continue to play a role in programming speed gains. Being versatile is already a key part of a programmer’s job description, but being able to write efficient, lean code will likely grow in importance as speed and scalability both remain core priorities.

Choosing the fastest programming language for your needs

Determining which programming language is the fastest depends on your individual use case. If you’re looking to create a web solution, for example, you’d specifically want the fastest web programming language.
If you’re working with complex, distributed systems that need a high level of fault tolerance and the ability to scale, Elixir is the ideal language to work with. Find out more about its efficiency potential on our Elixir page, or by contacting our team directly.

The post What Is the Fastest Programming Language? Making the Case for Elixir appeared first on Erlang Solutions.

by Content Team at February 01, 2024 10:33