Planet Jabber

March 29, 2023

Erlang Solutions

How to debug your RabbitMQ

Discover the right tools and methods for RabbitMQ debugging.

What you will learn in this blog

Our RabbitMQ consultancy customers come from a wide range of industries. As a result, we have seen almost all of the unexpected behaviours it can present. RabbitMQ is a complex piece of software that employs concurrency and distributed computing (via Erlang), so debugging it is not always straightforward. To get to the root cause of unexpected (and unwanted) behaviour, you need the right tools and the right methodology. In this article we will demonstrate both, to help you learn the craft of debugging RabbitMQ.

The problem of debugging RabbitMQ

The inspiration for this blog comes from a real-life example. One of our customers had RabbitMQ's Management HTTP API providing crucial information to their system. The system relied heavily on the API, specifically on the /api/queues endpoint, because it needed to know the number of ready messages in each queue in a RabbitMQ cluster. The problem was that sometimes an HTTP request to that endpoint lasted up to tens of seconds (in the worst case they could not even get a response from the API at all).

So what caused some requests to take so long? To answer that question, we tried to reproduce the problem through load testing.

Running load tests

We used a platform we created for MongooseIM to run our Continuous Load Testing. Here are some of the most important aspects of the platform:

  1. all services involved in a load test run inside Docker containers
  2. the load is generated by Amoc, an open-source tool written in Erlang for generating massively parallel loads of any kind (AMQP in our case)
  3. metrics from the system under test and from Amoc are collected for later analysis.

The diagram below shows the logical architecture of an example load test with RabbitMQ:

In the diagram, the left-hand side shows a cluster of Amoc nodes that emulate AMQP clients which, in turn, generate the load against RabbitMQ. On the other side, we can see a RabbitMQ cluster that serves the AMQP clients. All metrics from the Amoc and RabbitMQ services are collected and stored in an InfluxDB database.

Slow queries to the Management HTTP API

We tried to reproduce the slow queries to the Management HTTP API in our load tests. The test scenario was fairly simple. A group of publishers published messages to the default exchange. Messages from each publisher were routed to a dedicated queue (each publisher had a dedicated queue). There were also consumers attached to each queue. Queue mirroring was enabled.

For concrete values, check the table below:

That configuration stressed the Rabbit servers in our infrastructure, as can be seen in the graphs below:

Each RabbitMQ node consumed about 6 (out of 7) CPU cores and roughly 1.4 GB of RAM, except for rabbitmq-1, which consumed significantly more than the others. That was most likely because it had to serve more Management HTTP API requests than the other two nodes.

During the load test, the /api/queues endpoint was queried every two seconds for the list of all queues together with their corresponding messages_ready values. A query looked like this:

http://rabbitmq-1:15672/api/queues?columns=name,messages_ready

Here are the test results:

The figure above shows the query time during a load test. It is clear that things are very slow. The median is 1.5 seconds, while the 95th, 99th and 99.9th percentiles and the max reach 20 seconds.

Debugging

Once the problem was confirmed and could be reproduced, we were ready to start debugging. The first idea was to find the Erlang function that is called when a request to the RabbitMQ Management HTTP API arrives and determine where that function spends its execution time. If we could do this, it would allow us to locate the most expensive code behind the API.

Finding the entry-point function

To find the function we were looking for, we took the following steps:

  1. searched through the RabbitMQ Management plugin to find the right “HTTP path to function” mapping,
  2. used Erlang tracing to check whether the function we found is actually called when a request arrives.

The Management plugin uses cowboy (an Erlang HTTP server) under the hood to serve the API requests. Each HTTP endpoint requires a cowboy callback module, so we easily found the rabbit_mgmt_wm_queues:to_json/2 function, which looked like it handled requests arriving at /api/queues. We confirmed that with tracing (using the recon library, which ships with RabbitMQ by default).

root@rmq-test-rabbitmq-1:/rabbitmq_server-v3.7.9# erl -remsh rabbit@rmq-test-rabbitmq-1 -sname test2 -setcookie rabbit  
Erlang/OTP 21 [erts-10.1] [source] [64-bit] [smp:22:7] [ds:22:7:10] [async-threads:1]  

Eshell V10.1  (abort with ^G)  
(rabbit@rmq-test-rabbitmq-1)1> recon_trace:calls({rabbit_mgmt_wm_queues, to_json, 2}, 1).  
1  

11:0:48.464423 <0.1294.15> rabbit_mgmt_wm_queues:to_json(#{bindings => #{},body_length => 0,cert => undefined,charset => undefined,  
  has_body => false,  
  headers =>  
      #{<<"accept">> => <<"*/*">>,  
        <<"authorization">> => <<"Basic Z3Vlc3Q6Z3Vlc3Q=">>,  
        <<"host">> => <<"10.100.10.140:53553">>,  
        <<"user-agent">> => <<"curl/7.54.0">>},  
  host => <<"10.100.10.140">>,host_info => undefined,  
  media_type => {<<"application">>,<<"json">>,[]},  
  method => <<"GET">>,path => <<"/api/queues">>,path_info => undefined,  
  peer => {{10,100,10,4},54136},  
  pid => <0.1293.15>,port => 53553,qs => <<"columns=name,messages_ready">>,  
  ref => rabbit_web_dispatch_sup_15672,  
  resp_headers =>  
      #{<<"content-security-policy">> => <<"default-src 'self'">>,  
        <<"content-type">> => [<<"application">>,<<"/">>,<<"json">>,<<>>],  
        <<"vary">> =>  
            [<<"accept">>,  
             [<<", ">>,<<"accept-encoding">>],  
             [<<", ">>,<<"origin">>]]},  
  scheme => <<"http">>,  
  sock => {{172,17,0,4},15672},  
  streamid => 1,version => 'HTTP/1.1'}, {context,{user,<<"guest">>,  
               [administrator],  
               [{rabbit_auth_backend_internal,none}]},  
         <<"guest">>,undefined})  
Recon tracer rate limit tripped. 

The snippet above shows that we first enabled tracing for rabbit_mgmt_wm_queues:to_json/2, then manually sent a request to the Management API (using curl; not visible in the snippet), which generated the trace event. That is how we found our entry point for deeper analysis.

Using flame graphs

Once we had found a function that serves the requests, we could check how that function spends its execution time. The ideal technique for this is flame graphs. One of their definitions states that:

Flame graphs are a visualisation of profiled software, allowing the most frequent code-paths to be identified quickly and accurately.

In our case, we could use flame graphs to visualise the function's call stack or, in other words, which functions are called inside a traced function and how long it takes (relative to the traced function's execution time) for these functions to execute. This visualisation helps to quickly identify suspicious functions in the code.

For Erlang, there is a library called eflame that has tools both for collecting traces from an Erlang system and for building a flame graph from the data. But how do we inject that library into Rabbit for our load test?
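
Before we get to that, here is roughly what driving eflame looks like once the library is available on the node. This is only a sketch based on eflame's README: the traced call and the output path are placeholders, and the exact API may differ between eflame versions.

%% assumed eflame API: profile a single call and write stack samples to a file
eflame:apply(normal_with_children, "/tmp/stacks.out",
             rabbit_mgmt_wm_queues, to_json, [ReqData, Context]).

%% the samples are then rendered as an SVG flame graph with the script shipped with eflame
$ stack_to_flame.sh < /tmp/stacks.out > flame.svg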

Building a custom RabbitMQ Docker image

As we mentioned earlier, all the services in our load testing platform run inside Docker containers. Hence, we had to build a custom RabbitMQ Docker image with the eflame library included in the server code. We created a RabbitMQ-docker repository that makes it easy to build a Docker image with modified RabbitMQ source code.

Profiling with eflame

Once we had a modified RabbitMQ Docker image with eflame included, we could run another load test (the specification was the same as in the previous test) and start the actual profiling. These were the results:

We performed a number of measurements and obtained two types of results, as presented above. The main difference between these graphs lies in the rabbit_mgmt_util:run_run_augmentation/2 function. What does that difference mean?

From the previous load test results and manual code analysis, we know that there are both slow and fast queries. Slow requests can take up to twenty seconds, while fast ones take only a few seconds. This matches the query time graph above, with a 50th percentile of around 1.5 seconds but the 95th (and higher) percentiles reaching up to 20 seconds. We also measured the execution time of both cases manually using timer:tc/3, and the results were consistent.
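
For reference, timer:tc/3 takes a module, function and argument list, runs the call once and returns the execution time in microseconds together with the result. A minimal measurement looks like this (ReqData and Context stand in for a real request and context, for example ones captured from a trace event):

%% time a single call to the endpoint's callback; TimeUs is in microseconds
{TimeUs, _Result} = timer:tc(rabbit_mgmt_wm_queues, to_json, [ReqData, Context]),
io:format("request took ~p ms~n", [TimeUs div 1000]).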

This happens because there is a cache in the Management plugin. When the cache is valid, requests are served much faster, as the data has already been collected, but when it is invalid, all the necessary information has to be gathered.

Despite the graphs having the same length in the image, they represent different execution times (fast vs slow). Hence, it is hard to guess which graph shows which query without actually taking a measurement. The first graph shows a fast query, while the second shows a slow one. In the slow query graph, the rabbit_mgmt_util:augment/2 -> rabbit_mgmt_db:submit_cached/4 -> gen_server:call/3 -> … stack takes so much time because the cache is invalid and fresh data needs to be collected. So what happens when the data is collected?

Profiling with fprof

You might ask: “why don't we see the data collection functions in the flame graphs?” This happens because the cache is implemented as another Erlang process and the data collection happens inside the cache process. There is a gen_server:call/3 function visible in the graphs, which makes a call to the cache process and waits for a response. Depending on the cache state (valid or invalid), a response can come back quickly or slowly.

The data collection is implemented in the rabbit_mgmt_db:list_queue_stats/3 function, which is invoked from the cache process. Naturally, we should profile that function. We tried eflame, and after several dozen minutes this is the result we got:

eheap_alloc: Cannot allocate 42116020480 bytes of memory (of type "old_heap").

The Erlang heap memory allocator tried to allocate 42 GB of memory (in fact, the space was needed for the garbage collector to operate) and crashed the server. As eflame takes advantage of Erlang tracing to generate flame graphs, it was most probably overloaded by the number of trace events generated by the traced function. That is where fprof comes into play.

According to the official Erlang documentation, fprof is:

a time profiling tool that uses trace to file for minimal runtime performance impact.

That is very true. The tool handled the data collection function without any problems, although it took several minutes to produce the result. The output was quite big, so only the crucial lines are listed below:

(rabbit@rmq-test-rabbitmq-1)96> fprof:apply(rabbit_mgmt_db, list_queue_stats, [RA, B, 5000]).  
...
(rabbit@rmq-test-rabbitmq-1)97> fprof:profile().  
...
(rabbit@rmq-test-rabbitmq-1)98> fprof:analyse().  
...
%                                       CNT        ACC       OWN  
{[{{rabbit_mgmt_db,'-list_queue_stats/3-lc$^1/1-1-',4}, 803,391175.593,  105.666}],  
 { {rabbit_mgmt_db,queue_stats,3},              803,391175.593,  105.666},     %  
 [{{rabbit_mgmt_db,format_range,4},            3212,390985.427,   76.758},  
  {{rabbit_mgmt_db,pick_range,2},              3212,   58.047,   34.206},  
  {{erlang,'++',2},                            2407,   19.445,   19.445},  
  {{rabbit_mgmt_db,message_stats,1},            803,    7.040,    7.040}]}.  

The output consists of many entries like this one. The function marked with the % character is the one that the current entry concerns. The functions below it are the ones that were called from the marked function. The third column (ACC) shows the total execution time of the marked function (the function's own execution time plus that of its callees) in milliseconds. For example, in the entry above, the total execution time of the rabbit_mgmt_db:pick_range/2 function is 58.047 ms. For a detailed explanation of the fprof output, check the official fprof documentation.

The entry above is the top-level one, concerning rabbit_mgmt_db:queue_stats/3, which was called from the traced function. That function spent most of its execution time in rabbit_mgmt_db:format_range/4. We can go to the entry concerning that function and check what it spent its execution time on in turn. In this way we can go through the output and find the possible causes of the Management API slowness.

Reading the fprof output from top to bottom, we arrived at this entry:

{[{{exometer_slide,'-sum/5-anonymous-6-',7},   3713,364774.737,  206.874}],
 { {exometer_slide,to_normalized_list,6},      3713,364774.737,  206.874},     %
 [{{exometer_slide,create_normalized_lookup,4},3713,213922.287,   64.599}, %% SUSPICIOUS
  {{exometer_slide,'-to_normalized_list/6-lists^foldl/2-4-',3},3713,145165.626,   51.991}, %% SUSPICIOUS
  {{exometer_slide,to_list_from,3},            3713, 4518.772,  201.682},
  {{lists,seq,3},                              3713,  837.788,   35.720},
  {{erlang,'++',2},                            3712,   70.038,   70.038},
  {{exometer_slide,'-sum/5-anonymous-5-',1},   3713,   51.971,   25.739},
  {garbage_collect,                               1,    1.269,    1.269},
  {suspend,                                       2,    0.151,    0.000}]}.  

This entry concerns the exometer_slide:to_normalized_list/6 function, which in turn called two “suspicious” functions from the same module. Digging deeper, we found this:

{[{{exometer_slide,'-create_normalized_lookup/4-anonymous-2-',5},347962,196916.209,35453.182},
  {{exometer_slide,'-sum/5-anonymous-4-',2},   356109,16625.240, 4471.993},
  {{orddict,update,4},                         20268881,    0.000,172352.980}],
 { {orddict,update,4},                         20972952,213541.449,212278.155},     %
 [{suspend,                                    9301,  682.033,    0.000},
  {{exometer_slide,'-sum/5-anonymous-3-',2},   31204,  420.574,  227.727},
  {garbage_collect,                              99,  160.687,  160.687},
  {{orddict,update,4},                         20268881,    0.000,172352.980}]}. 

and

   {[{{exometer_slide,'-to_normalized_list/6-anonymous-5-',3},456669,133229.862, 3043.145},
  {{orddict,find,2},                           19369215,    0.000,129761.708}],
 { {orddict,find,2},                           19825884,133229.862,132804.853},     %
 [{suspend,                                    4754,  392.064,    0.000},
  {garbage_collect,                              22,   33.195,   33.195},
  {{orddict,find,2},                           19369215,    0.000,129761.708}]}.  

A large part of the execution time was consumed by the orddict:update/4 and orddict:find/2 functions. Combined, those two functions accounted for 86% of the total execution time.

This led us to the exometer_slide module from the RabbitMQ Management Agent plugin. If you examine the module, you will find all the aforementioned functions and the connections between them.

We decided to close the investigation at this stage because this was clearly the issue. Now that we have shared our thoughts on the problem with the community in this blog, who knows, maybe we will come up with a new solution together.

The observer effect

There is one last, essential thing to consider when it comes to debugging/observing systems: the observer effect. The observer effect is a theory which claims that if we are monitoring some kind of phenomenon, the observation process changes that phenomenon.

In our example, we used tools that take advantage of tracing. Tracing has an impact on a system, as it generates, sends and processes a lot of events.

The execution times of the aforementioned functions increased considerably when they were called with profiling enabled. Pure calls took several seconds, while calls with profiling enabled took several minutes. However, the difference between the slow and fast queries seemed to remain unchanged.

The observer effect was not evaluated within the scope of the experiment described in this blog post.

An alternative solution

The problem can also be solved in a slightly different way. Let's think for a moment whether there is another way of obtaining the queue names together with the number of messages in them. There is the rabbit_amqqueue:emit_info_all/5 function, which allows us to retrieve exactly the information we are interested in, directly from a queue process. We could use that API from a custom RabbitMQ plugin and expose an HTTP endpoint to return that data when queried.
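
As a rough sketch of the idea (not the actual plugin code): the relevant counters can be read straight from the queue processes via RabbitMQ's internal rabbit_amqqueue module and served from a thin HTTP handler. Treat the calls below as an assumption based on RabbitMQ internals of that era; the internal API is not stable and varies between versions.

%% sketch only: ask every queue for its name and ready-message count,
%% bypassing the Management plugin's aggregated statistics
%% (rabbit_amqqueue:list/0 and rabbit_amqqueue:info/2 are internal, version-dependent APIs)
queue_counts() ->
    [rabbit_amqqueue:info(Q, [name, messages_ready]) || Q <- rabbit_amqqueue:list()].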

We turned that idea into reality and built a proof-of-concept plugin called rabbitmq-queue-info that does exactly what is described above. The plugin was even load tested (the test specification was exactly the same as for the Management plugin, as mentioned earlier in the blog). The results are shown below and they speak for themselves:

Want more?

Do you want to know more about tracing in RabbitMQ, Erlang and Elixir? Check out WombatOAM, an intuitive system that makes monitoring and maintaining your systems easy. Get your free 45-day trial of WombatOAM now.

Appendix

RabbitMQ version 3.7.9 was used in all the load tests mentioned in this blog post. Special thanks go to Szymon Mentel and Andrzej Teleżyński for all the help with this publication.

Our work with RabbitMQ.

The post How to debug your RabbitMQ appeared first on Erlang Solutions.

by Erlang Admin at March 29, 2023 11:08

March 28, 2023

Erlang Solutions

Here’s Why You Should Build Scalable and Concurrent Applications with Elixir

In today’s world, when dealing with high levels of system requests, you need applications that can handle them without slowing down. Here’s where Elixir comes in. Elixir is a programming language designed to create highly scalable and concurrent applications. It is built on Erlang’s virtual machine (BEAM), which has been used for decades to build highly reliable and scalable systems.

Keep reading and I’ll explain what makes Elixir so useful for businesses and the benefits there are to such a scalable system.

A bit of background on Elixir

Elixir was created in 2012 by Ruby developer Jose Valim. The Ruby programming language had long been considered the standard for developing enterprise apps because it is well-built and has a great framework. But Ruby was built at a time when system demands were nothing like they are now. Today, applications often run into issues with concurrency and scaling.

Valim wanted to enable higher extensibility and productivity for use in building large-scale sites and apps. For this, he turned to the older Erlang programming language. Erlang was built as a telecom solution with massive concurrency and the ability to handle millions of phone call connections. Building on top of Erlang and combining all the benefits of Ruby, led to the high-concurrency, low-latency language we know today. Elixir is now used by a wide variety of companies, including Discord, Pinterest, Moz, and more.

Why businesses are adopting Elixir

So why are businesses making the switch?

Elixir-based development produces services that can handle substantially more traffic. You’ll have a platform that can expand and scale swiftly without compromising dependability, all while enhancing overall performance. More customers, more sales, and a higher return on investment (ROI) have proven to be big benefits.

But don’t just take our word for it; have a look at some of our clients who are thriving after moving their systems.

Building scalable and concurrent applications with Elixir

Scalability and concurrency are crucial aspects of modern-day applications. With Elixir, you can build applications that can handle a large number of requests, without compromising performance. 

Its ability to run multiple processes concurrently enables developers to build highly scalable applications. The concurrency model also allows developers to create lightweight processes that can communicate with each other seamlessly. Elixir also provides a distributed environment, which allows developers to build applications that can scale horizontally, ideal for accommodating rapid business growth.

More about the Actor Model

Elixir’s concurrency model is based on the Actor model, which provides a message-passing system between processes.

Source: Lightbend

The “Actor Model” is for doing many things at the same time. It works by using actors as the basic building blocks. Think of them as little machines that can do things independently of each other and talk to each other by sending messages. Each of these little machines is called a “process”.

This way of working makes it easy to build systems that can handle multiple things happening at once, even when issues occur.

Leveraging Elixir’s ecosystem for scalable and concurrent applications

Elixir has a vast ecosystem of libraries and frameworks that can help developers build scalable and concurrent applications. One of the most popular frameworks is Phoenix. It provides features such as real-time communication, web sockets, and channels, which make it an ideal choice for building scalable and concurrent web applications. 

Elixir also has libraries such as GenServer, which provides a simple and powerful way to build concurrent applications.

The ecosystem also includes Mix, a build tool that automates many of the tasks involved in creating Elixir applications. Mix provides tasks for creating new projects, testing, and deploying applications. It is also extensible, allowing developers to create their own tasks and plugins.

Fault-Tolerance

Elixir’s supervisor mechanism allows developers to build fault-tolerant applications that can recover from failures automatically. Elixir’s processes are isolated from each other, which means that if a process fails, it does not affect the entire system. Developers can also use Elixir’s built-in error handling mechanisms to handle errors gracefully.

Fault tolerance systems. Source: Finematics 

Elixir is easy to learn

A major draw of Elixir also lies in its simplicity. It has a simple, easy-to-learn syntax that is, again, a big plus for developers. It is also a productive language: it can accomplish a lot with minimal code.

The Elixir community

Despite the relative newness of Elixir when compared to other languages, the fast-growing Elixir community is very supportive, continually creating libraries and code that keep the ecosystem solid and robust.

The Elixir revolution

As digital transformation continues to reinvent business models, Elixir has become a growing choice for businesses looking for ways to differentiate themselves in complex technology markets.

We are now in the age where companies are eager to find cutting-edge technologies that will revolutionise how users interact with their applications. If you’re looking to build scalable and concurrent applications, Elixir is definitely worth considering. If you’d like to learn more about Elixir, check out our page.

The post Here’s Why You Should Build Scalable and Concurrent Applications with Elixir appeared first on Erlang Solutions.

by Cara May-Cole at March 28, 2023 06:16

March 22, 2023

Alexander Gnauck

XmppDotNet announcement

I want to announce the availability of the XmppDotNet XMPP library. XmppDotNet is the new name and next generation of our MatriX vNext XMPP library.

Why changing the name?

It was never intended to keep vNext in the name forever, and there is a lot of confusion between MatriX and MatriX vNext among some of our existing customers. Most of them expect both libraries to be fully API compatible, or expect a very simple upgrade path.
But this was never the case, and there are no plans to publish API-compatible wrappers or migration tools.
Development of the MatriX XMPP library started over two decades ago as agsXMPP. XMPP was still known as Jabber in those days. Jabber/XMPP has evolved a lot over the years, and the same applies to the underlying .NET technologies.

Most of the code in XmppDotNet has been rewritten, and the API and architecture are completely redesigned. It targets .NET Core only. While many legacy protocols and extensions are still implemented and working, the focus is on modern XMPP and its extensions.

The license is currently GPL v3. But there are plans to switch XmppDotNet to a less restrictive license in the future.

by gnauck at March 22, 2023 14:15

March 18, 2023

Ignite Realtime Blog

Release v1.1.0 of the MUC Real-Time Block List plugin for Openfire

We are happy to announce the immediate availability of a new version of the MUC Real-Time Block List plugin for Openfire, our cross-platform real-time collaboration server based on the XMPP protocol! This plugin can help you moderate your chat rooms, especially when your service is part of a larger network of federated XMPP domains.

From experience, the XMPP community has learned that bad actors tend to spam a wide range of public chat rooms on an equally wide range of different domains. Prior to the functionality provided by this plugin, the administrator of each MUC service had to manually adjust permissions, to keep unwanted entities out. With this new plugin, that process is automated.

In this new release, several small bugs were fixed, and new features were introduced, including:

  • The plugin now, by default, uses a block list as maintained on https://xmppbl.org/
  • Support for blocking full domains (rather than just individual users) has been added
  • Block list entries no longer disappear over time

The updated plugin should become available for download in your Openfire admin console in the course of the next few hours. Alternatively, you can download the plugin directly, from the plugin’s archive page.

For other release announcements and news follow us on Twitter and Mastodon.

1 post - 1 participant

Read full topic

by guus at March 18, 2023 10:00

March 16, 2023

Erlang Solutions

Here’s Why You Should Build Scalable Systems with Erlang

Building systems in the earlier days of the internet used to be pretty simple.

While the system was admittedly pretty limited, the demand to scale past one or two servers wasn’t particularly high. But upon entering the 21st century, we saw large companies (think Amazon, Starbucks, Yahoo) and many more find the need to scale not just a few servers, but thousands. Even tens of thousands. Suddenly, the old-school system was impractical and nearly impossible to scale past one or two servers.

The need for a system that offers scalability, flexibility and resilience had arrived. Enter Erlang, the powerful programming language designed for building highly scalable, fault-tolerant systems.

Wondering what benefits a scalable language like Erlang can offer? Keep reading. We’ll be breaking down those very basics in this blog.

A bit of background on Erlang

But first, a bit of history of the Erlang language.

Erlang was developed in the 1980s by Ericsson. Since then, it has been used to build large-scale distributed systems, such as telecom switching systems, online gaming platforms, and social networking sites. 

So, what is a scalable system?

Before we start discussing scalable systems, let’s see what is really meant by the term ‘scalable’.

Scalability is a system’s ability to increase or decrease in cost and performance in response to changes in demand.

Now, it may seem obvious that an application being used by one user would require different levels of technology than one being used by a hundred. Yet, the reality is that there are still many businesses using technology that does not allow for this flexibility. This often leads to companies having to invest more money in creating software from scratch whenever they grow. 

As digital transformation drives accelerated business growth, businesses of all sizes need to be able to scale operations and adapt to their rapidly changing environments quickly. It’s no surprise that scalability has become an increasingly important factor when dealing with developing applications. Businesses have no choice but to be scalable, or they will face becoming overwhelmed when usage increases and will eventually become unable to meet the demands of a growing user base.

A scalable language such as Erlang lets you write large new programmes and extend large existing ones relatively painlessly, depending on the size and complexity of the programme being managed.

Concurrency and parallelism in Erlang

Erlang does a lot of things differently, one of those things being concurrency. When compared to most other programming languages that treat concurrency as an afterthought, Erlang builds in concurrency from the very base of the system.

It was designed from the ground up to support concurrency and parallelism. 

Illustrating concurrency and parallelism on a 2-core CPU. Source: OpenClassrooms 2020. https://devopedia.org/concurrency-vs-parallelism

Erlang’s lightweight processes, also known as actors, can execute in parallel, and they communicate with each other by exchanging messages. This message-passing model makes it easy to build highly concurrent systems that can handle a large number of users.
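
A tiny sketch of that model: spawning a process and exchanging a message with it takes only a few lines.

-module(hello_actor).
-export([run/0]).

run() ->
    %% spawn a lightweight process that waits for a single message
    Pid = spawn(fun() ->
                    receive
                        {hello, From} -> From ! {hi_back, self()}
                    end
                end),
    Pid ! {hello, self()},           %% send it a message
    receive
        {hi_back, Pid} -> ok         %% and wait for its reply
    end.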

Fault tolerance (Let it Crash)

The philosophy behind Erlang is simply ‘Let it Crash.’

Sounds odd, right? Actually, letting it crash isn’t about crashing for the user or system.

It’s about containing failure and letting Erlang clean it up.

It knows that errors will happen, and things will break. 

Instead of trying to simply guard against those errors, Erlang has a built-in mechanism to handle those errors and failures. 

These mechanisms guard against errors: when a process crashes, it can be restarted automatically, so the system recovers quickly and continues to operate smoothly.

OTP

The Open Telecom Platform (OTP) is a set of tools, frameworks and principles that are designed to guide and support the deployment of Erlang systems. 

OTP includes a wide range of components, such as a supervision tree, process registry, and message queues, which can be used to build complex distributed systems.

Supervision tree example https://www.erlang.org/doc/design_principles/des_princ.htm

A key part of OTP is the supervision tree, a hierarchical structuring model based on the idea of workers and supervisors, which makes it possible to design and programme fault-tolerant software.

Workers are processes that perform computations, meaning they do the actual work. Supervisors are processes that monitor the behaviour of those workers; a supervisor can restart a worker if something goes wrong, as in the example below.
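
As an illustration, a minimal supervisor that restarts a (hypothetical) my_worker module whenever it crashes can be as small as this:

-module(my_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    %% restart a crashed child, tolerating up to 5 restarts within 10 seconds
    SupFlags = #{strategy => one_for_one, intensity => 5, period => 10},
    ChildSpecs = [#{id => my_worker,
                    start => {my_worker, start_link, []},   %% hypothetical worker module
                    restart => permanent}],
    {ok, {SupFlags, ChildSpecs}}.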

Using OTP in your projects will help you to avoid accidental complexity.

Distributed systems

Erlang was designed for building distributed systems. It has built-in support for building systems that span multiple nodes. Erlang’s distribution mechanism allows processes to communicate with each other across the network, making it simple to build systems that scale horizontally.
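
For example, once two nodes share the same cookie, connecting them and running code on the remote node takes a couple of lines (the node name here is made up):

%% connect to another node in the cluster and spawn a process there;
%% messages sent to Pid are delivered transparently across the network
pong = net_adm:ping('worker@other-host'),
Pid = spawn('worker@other-host', fun() ->
    io:format("running on ~p~n", [node()])
end).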

Erlang is high-performance

Erlang is an incredibly high-performing language that can handle a large number of concurrent users and has great resilience over high task loads. 

Well known for its low latency, it is well-suited for building systems that require real-time processing and also has a small memory footprint, which enables it to run efficiently on low-end hardware.

Hot Code Loading

Erlang has a unique feature known as hot code loading, which enables developers to update their systems without shutting them down. 

Another way to think of it: hot code loading is the art of replacing the engine of a running car without ever having to stop the car itself. The code can be updated without causing any disruption to the service, meaning zero impact on users.

This feature is particularly useful for building systems that need to be available 24/7.
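
The classic pattern behind this is a process loop that recurses through a fully qualified module:function call; when a new version of the module is loaded, the next iteration runs the new code while the process keeps its state. A minimal sketch:

-module(counter).
-export([start/0, loop/1]).

start() ->
    spawn(counter, loop, [0]).

loop(N) ->
    receive
        {bump, From} ->
            From ! {count, N + 1},
            %% a fully qualified call always uses the newest loaded version of
            %% counter:loop/1, so the code can be swapped under a running process
            counter:loop(N + 1)
    end.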

Scalability

Last but not least, Erlang’s scalability is second to none. This language can be used to build systems that can handle millions of users. Erlang’s concurrency model and distributed architecture make it easy to build systems that can scale horizontally across multiple nodes, allowing developers to handle increasing loads without sacrificing performance.

To conclude

Overall, Erlang is a great choice for building large-scale distributed systems that need to be highly available and performant. It handles concurrency and all its complexities with robustness and ease. There are thousands of companies across the globe that have enlisted Erlang beyond its early days in telecoms.

But don’t just take our word for it. Here are some of the clients who have felt the real-life impact of Erlang on their businesses.

Fancy finding more out about Erlang? Check out our page.

The post Here’s Why You Should Build Scalable Systems with Erlang appeared first on Erlang Solutions.

by Cara May-Cole at March 16, 2023 10:00

March 14, 2023

Ignite Realtime Blog

Developing Openfire Efficient XML Interchange (EXI) functionality

We are excited to announce that a new plugin for the Openfire real time collaboration server is in the works! This plugin implements Efficient XML Interchange (EXI) functionality and provides an XMPP implementation of EXI as defined in XEP-0322.

Efficient XML Interchange (EXI) is a binary XML format for exchange of data on a computer network. It is one of the most prominent efforts to encode XML documents in a binary data format, rather than plain text. Using EXI format reduces the verbosity of XML documents as well as the cost of parsing.

EXI is useful for:

  • a complete range of XML document sizes, from dozens of bytes to terabytes
  • reducing computational overhead to speed up parsing of compressed documents
  • increasing endurance of small devices by utilizing efficient decompression

Read more about EXI in its Wikipedia article (where the above definition was taken from).

The plugin that we’re developing today was first created by Javier Placencio, in 2013 and 2014. In 2023, that now dormant project was forked by the Ignite Realtime community.

Work on the plugin is progressing steadily. Most of the core functionality is believed to be ready. In preparation for the official release of the plugin, we are looking for opportunities to perform interoperability testing. So far, testing has been done with our own mock client implementations. To be able to release a fully functional plugin, we’d like to test against implementations of other authors. Development builds of the plugin can be downloaded from the Openfire EXI plugin archive page.

Are you interested in this? Please reach out to us on the Ignite Realtime Community, or stop by the open chat! We would love to hear from you!

For other release announcements and news follow us on Twitter and Mastodon.

1 post - 1 participant

Read full topic

by guus at March 14, 2023 19:39

March 13, 2023

Erlang Solutions

Introducing stream support in RabbitMQ

Want to know more about stream support in RabbitMQ? Arnaud Cogoluègnes, Staff Engineer at VMware, breaks down everything there is to know in this talk from the 2021 RabbitMQ Summit.

In July 2021, streams were introduced to RabbitMQ, using a new, extremely fast protocol that can be used alongside AMQP 0.9.1. Streams offer an easier way to solve several problems in RabbitMQ, including large fan-outs, replay and time travel, and large logs, all with very high throughput (1 million messages per second on a 3-node cluster). Arnaud Cogoluègnes, Staff Engineer at VMware, introduced streams and how they are best used.

This talk was recorded at RabbitMQ Summit 2021. The 4th edition of the RabbitMQ Summit will take place as a hybrid event, both in person (at the CodeNode venue in London) and virtually, on 16 September 2022, and will bring together some of the world's biggest companies that use RabbitMQ, all in one place.

Streams: a new type of data structure in RabbitMQ

Streams are a new data structure in RabbitMQ that open up a world of possibilities for new use cases. They model an append-only log, which is a big change from traditional RabbitMQ queues, as they have non-destructive consumer semantics. This means that when you read messages from a stream, they are not removed, whereas with queues, when a message is read from a queue, it is destroyed. This replayable behaviour of RabbitMQ streams is made possible by the append-only log structure.

[Slide: Streams, a new type of data structure in RabbitMQ: models an append-only log; persistent and replicated; non-destructive consumer semantics; AMQP 0.9.1 and a new protocol]

RabbitMQ also introduced a new protocol, the stream protocol, which allows much faster message flow. However, you can also access streams through the traditional AMQP 0.9.1 protocol, which remains the most widely used protocol in RabbitMQ. They are also accessible through the other protocols that RabbitMQ supports, such as MQTT and STOMP.

Strengths of streams

Streams have unique strengths that allow them to shine in certain use cases. These include:

Large fan-outs

When you have several applications in your system that need to read the same messages, you have a fan-out architecture. Streams are excellent for large fan-outs, thanks to their non-destructive consumer semantics, which remove the need to copy the message inside RabbitMQ as many times as there are consumers.

Replay and time travel

Streams also offer replay and time-travel capabilities. Consumers can attach at any point in a stream, using an absolute offset or a timestamp, and they can read and re-read the same data as many times as needed.

Throughput

Thanks to the new stream protocol, streams have the potential to be significantly faster than traditional queues. If you need high throughput or you are working with large messages, streams can often be a suitable option.

Large logs

Streams are also good for large logs. Messages in streams are always persisted to the file system, and messages do not stay in memory for long. When they are consumed, the operating system's file cache is used to allow fast message flow.


[Slide: Large fan-outs; replay/time travel; high throughput; large logs]

The log abstraction

A stream is immutable: you can add messages, but once a message has entered the stream, it cannot be removed. This makes the log abstraction of the stream a fairly simple data structure compared to queues, where messages are always being added and removed. This brings us to another important concept, the offset. The offset is simply a technical index of a message within the stream, or a timestamp. Consumers can tell RabbitMQ to start reading from an offset instead of from the beginning of the stream. This allows easy replay and time travel of messages. Consumers can also delegate the responsibility of tracking the offset to RabbitMQ.

[Slide: The log abstraction: a stream models an append-only log; FIFO data structure; non-destructive reads; oldest message, offset, last message, next message would go here]

We can have any number of consumers on a stream; they do not compete with each other, a consumer application will not steal messages from other applications, and the same application can read the stream of messages many times.

Queues can keep messages in memory or on disk, they can live on a single node or be replicated; streams are persistent and replicated at all times. When we create a stream, it will have a leader located on one node and replicas on other nodes. The replicas follow the leader and synchronise the data. The leader is the only one that can handle write operations, and the replicas are only used to serve consumers.

RabbitMQ queues vs. streams

Streams are here to complement queues and expand RabbitMQ's use cases. Traditional queues are still the best tool for the most common use cases in RabbitMQ, but they have their limitations; there are times when they are not the best fit.

Streams are, just like queues, a FIFO data structure, i.e. the oldest published message will be read first. Providing an offset lets the client skip the beginning of the stream, but the messages will be read in publication order.

In RabbitMQ, picture a traditional queue with a couple of messages and a consumer application. After registering the consumer, the broker will start dispatching messages to the client and the application can start processing them.

At this point the message is at an important moment in its lifetime: it is present on the sender side and also on the consumer side. The broker still needs to care about the message, because it can be rejected, and it needs to know that it has not yet been acknowledged. After the application has finished processing the message, it can acknowledge it, and from that moment on the broker can get rid of the message and consider it processed. This is what we can call destructive consumption, and it is the behaviour of classic and quorum queues. When using streams, the message stays in the stream for as long as the retention policy allows.

Implementing large fan-out setups with RabbitMQ was not optimal before streams. When a message comes in, it goes to an exchange and is routed to a queue. If you want another application to process the messages, you need to create a new queue, bind it to the exchange and start consuming. This process creates a copy of the message for each application, and if you need yet another application to process the same messages, you have to repeat the process: another queue, a new binding, a new consumer and a new copy of the message.

This method works and has been used for years, but it does not scale elegantly when you have many consuming applications. Streams provide a better way to implement this, as messages can be read by every consumer separately, in order, from the stream.

RabbitMQ stream throughput with AMQP and the stream protocol

As explained in the talk, streams showed higher throughput compared to quorum queues.

They got around 40,000 messages per second with quorum queues and 64,000 messages per second with streams. This is because streams are a simpler data structure than quorum queues; they do not have to deal with complicated things like message acknowledgements, rejected messages or requeueing.

[Slide: Streams over AMQP: 3-node cluster (c2-standard-16 instances); publish rates in messages/second for quorum queues vs streams over AMQP]

Quorum queues are still state-of-the-art replicated and persistent queues, while streams are for other use cases. By using the dedicated stream protocol, transfer rates of one million messages per second can be achieved.

[Slide: Stream protocol: 3-node cluster (c2-standard-16 instances); publish rates in messages/second for quorum queues, streams over AMQP and streams over the stream protocol]

The stream protocol has been designed with performance in mind and uses low-level techniques such as the libc sendfile API, the operating system page cache, and batching, which makes it faster than queues over AMQP.

The RabbitMQ stream plugin and clients

Streams are available through a new plugin in the core distribution. When it is enabled, RabbitMQ starts listening on a new port, which can be used by clients that understand the stream protocol. It is integrated with the existing infrastructure in RabbitMQ, such as the management UI, the REST API and Prometheus.

[Slide: Streams are also accessible through a new protocol: fast; plugin in the core distribution; management integration]

There are dedicated clients in Java and Go that use this new stream protocol. The Java client is the reference implementation. A performance testing tool is also available. Clients for other languages are also actively developed by the community and the core team.

The stream protocol is a bit simpler than AMQP; there is no routing: you just publish to a stream, there is no exchange involved, and you consume from a stream just like from a queue. No logic is needed to decide where the message should be routed. When you publish a message from your client applications, it goes onto the wire and almost directly to storage.

There is great interoperability between streams and the rest of RabbitMQ. Messages can be consumed from an AMQP 0.9.1 client application, and it also works the other way around.

Example use case for interoperability:

Queues and streams live in the same namespace in RabbitMQ, so you can specify the name of the stream you want to consume from using the regular AMQP clients, via the x-stream-offset consumer argument to basicConsume.
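
As a hedged sketch using the Erlang AMQP 0.9.1 client (any AMQP client works the same way; the stream name is a placeholder): consuming from a stream over AMQP requires a prefetch limit and manual acknowledgements, and the x-stream-offset consumer argument chooses where to start reading ("first", "last", "next", a numeric offset or a timestamp).

%% sketch: reading a stream through the regular AMQP 0.9.1 Erlang client
-include_lib("amqp_client/include/amqp_client.hrl").

consume_stream(Channel) ->
    %% streams require a prefetch limit and manual acks over AMQP 0.9.1
    #'basic.qos_ok'{} = amqp_channel:call(Channel, #'basic.qos'{prefetch_count = 100}),
    #'basic.consume_ok'{} =
        amqp_channel:subscribe(Channel,
                               #'basic.consume'{queue = <<"my-stream">>,
                                                arguments = [{<<"x-stream-offset">>, longstr, <<"first">>}]},
                               self()),
    wait_for_messages(Channel).

wait_for_messages(Channel) ->
    receive
        {#'basic.deliver'{delivery_tag = Tag}, #amqp_msg{payload = Payload}} ->
            io:format("received ~p~n", [Payload]),
            amqp_channel:cast(Channel, #'basic.ack'{delivery_tag = Tag}),
            wait_for_messages(Channel)
    end.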

Publishing with AMQP clients is very easy because it is the same as with queues: you publish to an exchange.

[Slide: Adding a stream for analytics: publishers, queues for AMER / EMEA / APAC processing, plus a queue for global analytics; (possibly) multi-protocol publishers]

The image above is an example of how you might imagine using streams. You have a publisher publishing messages to an exchange and, depending on the routing key of the messages, they are routed to different queues. So you have a queue for each region of the world: for example, a queue for the Americas, one for Europe, one for Asia and one for headquarters. A dedicated consumer application performs region-specific processing for each region.

If you upgrade to RabbitMQ 3.9 or later, you can simply create a stream and bind it to the exchange with a wildcard, so that all the messages are still routed to the queues but the stream also receives all the messages. You can then point an application that uses the stream protocol at this stream, and you can imagine this application doing worldwide analytics every day without even reading the stream very fast. This is how streams can fit into existing applications.

Guarantees for RabbitMQ streams

Streams support at-least-once delivery, as they support a mechanism similar to AMQP publisher confirms. There is also a deduplication mechanism: the broker filters out duplicate messages based on the publishing sequence number, such as a key in a database or a line number in a file.

[Slide: Guarantees: at least once, no message loss; message deduplication on publishing; flow control]

On both sides we have flow control, so the TCP connections of fast publishers will be blocked. The broker only sends messages to the client when it is ready to accept them.

Summary

[Slide: Streams, a new replicated and persistent log-like data structure in RabbitMQ, unlocking new scenarios: large fan-outs, replay/time travel, high throughput, large logs. Try it out!]

Streams are a new replicated and persistent data structure in RabbitMQ that models an append-only log. They are good for large fan-outs, support replay and time-travel features, and are suitable for high-throughput scenarios and for large logs. They store their data on the file system and never in memory.

If you think streams or RabbitMQ could be useful to you but don't know where to start, talk to our experts, we are always happy to help. If you want to see the latest features and case studies from the world of RabbitMQ, join us at RabbitMQ Summit 2022.

The post Introducing stream support in RabbitMQ appeared first on Erlang Solutions.

by Erlang Admin at March 13, 2023 19:33

March 09, 2023

Ignite Realtime Blog

Botz version 1.2.0 release

We have just released version 1.2.0 of the Botz framework for Openfire!

The Botz library adds to the already rich and extensible Openfire the ability to create internal user bots.

In this release, a bug that prevented client sessions for bots from being created was fixed. Hat-tip to Kris Iyer for working with us on a fix!

Download the latest version of the Botz framework from its project page!

For other release announcements and news follow us on Twitter and Mastodon.

1 post - 1 participant

Read full topic

by guus at March 09, 2023 15:46

Erlang Solutions

Creating a simple weather application with Phoenix LiveView

Introduction

In this article we will discuss our experience building an online weather application in Elixir using Phoenix LiveView. We created a real-time weather application that allows users to see the past, current, and forecast temperature and precipitation data for any UK postcode. The goals of building this app were:

  • to further familiarise ourselves with Phoenix LiveView
  • to investigate some of the libraries available for displaying graphs in LiveView

Our reason for displaying both temperature and precipitation data simultaneously was to test the capabilities of the libraries in question, as some rather complex graph configurations are required to display two lines on one y-axis (minimum and maximum temperature) and an additional line on a second, independent y-axis (precipitation).

We wrote our app’s front-end using a combination of simple HTML attributes and Phoenix LiveView’s built-in hooks to create dynamic behaviour without any JavaScript. For example, when the user inputs a postcode and submits the form, a new graph for the temperature and precipitation in the given area is instantly generated.

We investigated two libraries in order to generate our graphs: Contex and Vega-Lite. Contex is a simple library for generating charts and plots in Elixir by generating SVG output. Vega-Lite is a more general tool, a high-level language (based on Vega, a declarative language for creating interactive visualisations) with various functionality for different common graph types, where the visualisation is created as a JSON object.

Using APIs and creating an input form

Before we could work with any graph libraries, our first task was to use an open-source API to retrieve the weather data we required for the graph. We began by getting the current temperature for a fixed “default” postcode, to ensure it was working correctly. It was not difficult to find an API that suited our purposes. We soon came across Open-Meteo, which contained all the information we needed: the current temperature, the maximum and minimum temperature for a given day, the seven-day forecast, and the precipitation.

However, this API came with a limitation: it was only able to retrieve the weather for a given latitude and longitude, rather than for a given postcode. Due to this, we had to find a second open-source API, which could fetch the latitude and longitude for a given postcode. We ultimately landed on the API provided by postcodes.io for this purpose. We then followed this up with a call to Open-Meteo, so that when the application was fully set up inserting a postcode would seamlessly fetch the relevant temperature data for that location. The following code shows the function to retrieve this information and add it to the LiveView socket:

def assign_temp_and_location(socket, location) do
  {coordinates, admin_ward} = Weather.get_location(location)
  temperature = Weather.get_weather(coordinates)

  assign(socket,
    temperature: temperature,
    admin_ward: admin_ward
  )
end

Having achieved this, our next task was simple: creating an input field to allow the user to specify a postcode that we would then fetch the weather data for. We were able to implement this using a simple HTML form element which, upon submission, queries the APIs and generates a new graph based on the data received.

Creating a graph with Contex

At this stage we had a barebones implementation of the current temperature through an input field. The next step of the process was expanding this from a basic plaintext temperature display to a more detailed graph containing the forecast, maximum and minimum temperature, and precipitation. We needed to make use of an external library to accomplish this, and originally found Contex, which allows for the creation and rendering of SVG charts through Elixir. This initially seemed like the right call, as the Contex charts were neat and legible, and the code needed to create them was relatively simple:

defp assign_chart_svg(%{assigns: %{chart: chart, admin_ward: admin_ward}} = socket) do
  assign(socket,
    :chart_svg,
    Contex.Plot.new(700, 400, chart)
    |> Contex.Plot.titles("Daily maximum and minimum temperature in #{admin_ward}", "")
    |> Contex.Plot.axis_labels("Date", "Temperature")
    |> Contex.Plot.plot_options(%{legend_setting: :legend_right})
    |> Contex.Plot.to_svg()
  )
end

However, problems soon arose with attempting to use this library for our purposes. Firstly, Contex would not allow for the setting of custom intervals on a line graph, meaning when trying to include both the seven-day history and seven-day forecast the x-axis would be abbreviated, with an interval every two days instead of every day. Secondly, our desire to make use of multiple y-axes to display both the maximum and minimum temperature and the precipitation simultaneously was not possible.

The graph generated by Contex

Exacerbating this problem was Contex’s documentation, which was very limited, particularly for line charts, a relatively recent addition to the library. This meant that it was difficult to ascertain whether there were solutions to our problems. Unable to achieve what we set out for with Contex, we opted to investigate different libraries.

From Contex to Vega-Lite

We were recommended to look at Vega-Lite, which has very thorough documentation for the JSON specification. By combining this with the documentation for the Vega-Lite Elixir bindings, we were able to generate graphs with much greater functionality than Contex had provided.

Vega-Lite is very powerful, and using it allowed us to easily display both the seven-day history and the seven-day forecast data received from the API. We were also able to show the precipitation in addition to the temperature data, each with its own independent y-axis, to modify the colours of the lines, and to add axis labels and adjust their angles for optimum visual appeal. Finally, we added a vertical line in the middle of the graph, indicating the data points for the current date.

The graph generated by Vega-Lite

It’s worth noting, however, that when using the Vega-Lite Elixir bindings, all of the options are normalised to snake_case atom keys. For example, in axis (for colouring the axis labels), the field is `title_color:` rather than `titleColor:` as given in the JSON specification. This caused us some brief trouble when we first used the camel-case version and the options were not displayed. The following is a partial excerpt of the code used for the graph:

elixir
new_chart = Vl.new(width: 700, height: 400, resolve: [scale: [y: "independent"]])

chart =
  Vl.data_from_values(new_chart, dataset)
  |> Vl.layers([
    Vl.new()
    |> Vl.layers([
      Vl.new()
      |> Vl.mark(:line, color: "#FF2D00")
      |> Vl.encode_field(:x, "date", type: :ordinal, title: "Date", axis: [label_angle: -45])
      |> Vl.encode_field(:y, "max", type: :quantitative, title: "Maximum temperature"),

In order to render the Vega-Lite graphs to a usable format (in our case SVG) we needed the VegaLite.Export functions. Unfortunately, for SVG, PDF, and PNG exports these functions rely on npm packages, meaning we had to add Node.js, npm, and some additional Vega and Vega-Lite dependencies to our project. Exporting the graph as an SVG was the best option, as it allowed us to reuse code from the Contex implementation and display the graph in our HTML page render as we had before; had we wanted to avoid installing the npm packages, it would also have been possible to export the graph directly to HTML or as JSON instead.
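
In our case, that meant assigning the rendered SVG much as before; here is a sketch based on the VegaLite.Export functions mentioned above (exact function names may differ between library versions):

elixir
# Sketch: render the Vega-Lite spec to SVG for the template, mirroring the earlier
# Contex-based assign_chart_svg/1. Relies on the npm-backed export tooling.
defp assign_chart_svg(%{assigns: %{chart: chart}} = socket) do
  assign(socket, :chart_svg, VegaLite.Export.to_svg(chart))
end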

Conclusion

At the end of the process, we had broadly succeeded in creating what we set out to create: a Phoenix LiveView app that could display the weather and precipitation data for a week on either side of the current date for any given UK postcode, in a neat and colour-coded line graph. We came away from the process with both a better understanding of LiveView and a good idea of the strengths and weaknesses of the two libraries we utilised.

Contex provides simpler, more lightweight functionality, making it easy to create basic graphs. However, its comparatively limited feature set and insufficient documentation are obstacles to using it for more complex graphs, and as such it ultimately proved unsuitable for our purposes. Vega-Lite, meanwhile, is thoroughly documented and offers more intricate and advanced functionality, allowing us to create the application as outlined. It does have drawbacks of its own: its language-agnostic documentation occasionally made it slightly confusing to implement in Elixir, and the packages necessary to export the graphs add a significant JavaScript footprint to the application. When working on a project that could require one of these two libraries, it may help to weigh these strengths and weaknesses to determine which is the better fit.

References

Weather application on Github

Open-Meteo API

Postcodes API

Contex documentation

Vega-Lite documentation

Vega-Lite Elixir bindings documentation

The post Creating a simple weather application with Phoenix LiveView appeared first on Erlang Solutions.

by Rhys Davey at March 09, 2023 10:00

Gajim

Gajim 1.7.2

Gajim 1.7.2 brings many bug fixes and some useful improvements. Gajim now allows you to delete messages from your local chat history (useful in case of nasty spam messages). Furthermore, detection of WAV audio files has been improved, and you can now click the waveform to skip to a specific timestamp within a voice message. Thank you for all your contributions!

What’s New

Many users voiced their interest in having a way to remove messages from their local chat history. This is now possible and lets you remove nasty spam messages, if moderators didn’t catch them in time.

  • Click the audio waveform to skip to specific timestamps within voice messages
  • Gajim’s Windows installer is now available in Polish, and it also looks better on High-DPI screens
  • When creating a new group chat, Gajim now automatically selects it
  • An issue with settings migration has been fixed
  • Infinite file size limit for file transfers is now recognized correctly
  • Nickname highlight in group chats has been improved

Have a look at the changelog for a complete list.

Gajim

As always, don’t hesitate to contact us at gajim@conference.gajim.org or open an issue on our Gitlab.

March 09, 2023 00:00

March 07, 2023

Isode

M-Guard 1.4 New Capabilities

M-Guard 1.4 is a platform support update release for M-Guard Console and M-Guard Appliance. M-Guard Appliance has been updated to use UEFI instead of BIOS for key system services.

Platform Support

The M-Guard Appliance now supports running on Netgate 6100 and 6100 MAX appliance systems.

M-Guard Appliance on Hyper-V now uses Generation 2 virtual machines.

M-Guard Appliance on VirtualBox now uses EFI.

Use of BIOS for booting is deprecated in favor of UEFI.

Base Operating System Upgraded

The M-Guard Appliance operating system is now powered by FreeBSD 13.1.

Notice

Upgrading earlier installations requires special steps.  Contact Isode support for assistance.

by admin at March 07, 2023 12:15

Erlang Solutions

RabbitMQ Quorum Queues explained: what you need to know.

This queue type matters when RabbitMQ is used in a clustered installation. Find out more in this blog.


Introduction to Quorum Queues

In RabbitMQ 3.8.0, one of the most significant new features was the introduction of Quorum Queues. The Quorum Queue is a new queue type which is expected to replace the default queue (now called classic) for some use cases in the future. This queue type is important when RabbitMQ is used in a clustered installation, as it provides less network-intensive message replication by means of the Raft protocol.

Using Quorum Queues

A classic queue has a master running somewhere on a node in the cluster, while its mirrors run on other nodes. Quorum Queues work the same way: by default, the leader runs on the node the client application that created the queue was connected to, and followers are created on the rest of the nodes in the cluster.

In the past, queue replication was specified through policies in conjunction with Classic Queues. Quorum Queues are created differently, but they should be compatible with all client applications that allow arguments to be provided when declaring a queue. The x-queue-type argument has to be supplied with the value quorum when creating the queue.

For example, using the Elixir AMQP client1, declaring a Quorum Queue looks like this:

Queue.declare(publisher_chan, "my-quorum-queue", durable: true, arguments: [ "x-queue-type": "quorum" ])

An important difference between Classic and Quorum Queues is that Quorum Queues can only be declared as durable; otherwise, the following error message is raised:

:server_initiated_close, 406, "PRECONDITION_FAILED - invalid property 'non-durable' for queue 'my-quorum-queue'

After declaring the queue, we can see in the Management Interface that it is of type quorum:

We can see that a Quorum Queue has a leader, which serves roughly the same purpose as the Classic Queue master. All communication is routed to the Queue Leader, which means that the locality of the queue leader has an effect on message latency and bandwidth; however, the effect should be smaller than with Classic Queues.

Consuming from a Quorum Queue is done in the same way as for other queue types.

New features of Quorum Queues

Quorum Queues come with some special features and restrictions. They cannot be non-durable, because the Raft log is always written to disk, so they can never be declared transient. As of version 3.8.2, they also do not support message TTL or message priorities2.

Since the use case for Quorum Queues is data safety, they also cannot be declared exclusive, which would mean they are deleted as soon as the consumer disconnects.

As all messages in Quorum Queues are persistent, the AMQP ‘delivery-mode’ option has no effect on their operation.

Single Active Consumer

This is not exclusive to Quorum Queues, but it is worth mentioning here: although the Exclusive Queue feature was lost, we gained a new, frequently requested feature that is even better in many respects.

Single Active Consumer lets you attach multiple consumers to a queue while only one of them is active. This allows you to create highly available consumers while making sure that, at any given moment, only one of them receives messages, something that was previously not possible to achieve with RabbitMQ.

An example of declaring a queue with the Single Active Consumer feature in Elixir:

Queue.declare(publisher_chan, "single-active-queue", durable: true, arguments: [ "x-queue-type": "quorum", "x-single-active-consumer": true ])


A queue with the Single Active Consumer setting enabled is marked as SAC. In the image above, we can see that two consumers are attached to it (two channels issued Basic.consume on the queue). When publishing to the queue, only one of the consumers will receive the message. When that consumer disconnects, the other one should take exclusive ownership of the message stream.

Basic.get, or inspecting the message in the Management Interface, cannot be used with Single Active Consumer queues.

Keeping track of retries and poison messages

Keeping a count of how many times a message was rejected is one of the most requested features for RabbitMQ, and it has finally arrived with Quorum Queues. This lets you handle so-called poison messages more effectively than before, as earlier approaches often suffered from the inability to give up on retries when a message got stuck, or had to track how many times a message was delivered in an external database.

NOTE: for Quorum Queues, it is best practice to always have some limit on the number of times a message can be rejected. Letting this message reject count grow forever can lead to erroneous queue behaviour due to the Raft implementation.

When using Classic Queues and a message is requeued for any reason, it is redelivered with the ‘redelivered’ flag set; essentially, this flag means “the message may have been processed already”. It helps you check whether the message is a duplicate or not. The same flag exists for Quorum Queues, but it has been extended with the ‘x-delivery-count’ header, which keeps track of how many times the message has been requeued.

We can observe this header in the Management Interface:

As we can see, the ‘redelivered’ flag is set and the ‘x-delivery-count’ header is 2.

Your application is now better equipped to decide when to give up on retries.

If that is not enough, you can now define rules based on the delivery count to route the message to a different exchange instead of requeueing it. This can be done entirely within RabbitMQ; your application does not have to know anything about retries. Let me illustrate with an example!

Example: re-routing rejected messages! Our use case is that we receive messages we need to process from an application that may, however, send us messages we cannot process. The reason could be that the messages are malformed, or that our application cannot process them for one reason or another, but we have no way of notifying the sending application about these errors. Such errors are common when RabbitMQ serves as the message bus in a system and the sending application is not under the control of the receiving application’s team.

We then declare a queue for the messages we could not process:

We also declare a fanout exchange, which we will use as the dead-letter exchange:

And we bind the unprocessable-messages queue to it.

We create the application queue called my-app-queue and the corresponding policy:
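
The original post illustrates these steps with Management UI screenshots. As a rough equivalent, here is a hedged sketch using the same aliased Elixir AMQP modules as above; the queue names come from the article, while the exchange name, policy name and delivery-limit value are assumptions for illustration.

# Hedged sketch, not the article's exact setup; `chan` is an open AMQP channel.
{:ok, _} = Queue.declare(chan, "unprocessable-messages", durable: true)
:ok = Exchange.declare(chan, "dead-letter-x", :fanout, durable: true)
:ok = Queue.bind(chan, "unprocessable-messages", "dead-letter-x")

# The application queue itself, declared as a quorum queue
{:ok, _} = Queue.declare(chan, "my-app-queue", durable: true, arguments: ["x-queue-type": "quorum"])

# Corresponding policy, run from a shell (assuming delivery-limit can be set via policy):
#   rabbitmqctl set_policy my-app-dlx "^my-app-queue$" \
#     '{"delivery-limit": 3, "dead-letter-exchange": "dead-letter-x"}' --apply-to queues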

We can use Basic.reject or Basic.nack to reject the message; the requeue property must be set to true.

Here is a simplified example in Elixir:

def get_delivery_count(headers) do
  case headers do
    :undefined ->
      0

    headers ->
      {_, _, delivery_cnt} = List.keyfind(headers, "x-delivery-count", 0, {:_, :_, 0})
      delivery_cnt
  end
end

receive do
  {:basic_deliver, msg, %{delivery_tag: tag, headers: headers} = meta} ->
    delivery_count = get_delivery_count(headers)
    Logger.info("Received message: '#{msg}' delivered: #{delivery_count} times")

    case msg do
      "reject me" ->
        Logger.info("Rejected message")
        :ok = Basic.reject(consumer_chan, tag)

      _ ->
        Logger.info("Acked message")
        :ok = Basic.ack(consumer_chan, tag)
    end
end

First, we publish the message “this is a good message”:

13:10:15.717 [info] Received message: 'this is a good message' delivered: 0 times
13:10:15.717 [info] Acked message

Then we publish a message that we reject:

13:10:20.423 [info] Received message: 'reject me' delivered: 0 times
13:10:20.423 [info] Rejected message
13:10:20.447 [info] Received message: 'reject me' delivered: 1 times
13:10:20.447 [info] Rejected message
13:10:20.470 [info] Received message: 'reject me' delivered: 2 times
13:10:20.470 [info] Rejected message

And after being delivered three times, it is routed to the unprocessable-messages queue.

We can see in the Management Interface that the message is routed to the queue:

Controlling quorum members

Quorum queues do not automatically change the group of followers/leaders. This means that adding a new node to the cluster will not automatically ensure that the new node is used to host quorum queues. In earlier versions, classic queues handled placing queues on new cluster nodes through the policy interface; however, this could cause problems as cluster sizes were scaled up or down. An important new feature in the 3.8.x series, for both quorum and classic queues, is the built-in queue master rebalancing operations. Previously this was only possible with external scripts and plugins.

Adding a new member to the quorum can be achieved with the grow command:

rabbitmq-queues grow rabbit@$NEW_HOST all

Removing a stale host, for example a decommissioned one, from the members can be done with the shrink command:

rabbitmq-queues shrink rabbit@$OLD_HOST

We can also rebalance the queue masters so that the load is even across nodes:

rabbitmq-queues rebalance all

This (in bash) will display a nice table with statistics on the number of masters per node. On Windows, use the --formatter json flag to get readable output.

Summary

RabbitMQ 3.8.x comes with many new features, and Quorum Queues are just one of them. They provide a new, easier to understand and in some cases less resource-intensive implementation for achieving replicated queues and high availability. They are built on Raft and support a different feature set from Classic Queues, which are fundamentally based on a custom guaranteed multicast protocol3 (a variant of Paxos). As this queue type is still fairly new, only time will tell whether it becomes the most widely used and preferred queue type for most distributed RabbitMQ installations, compared to its counterpart, Classic Mirrored Queues. Until then, use both as best fits your Rabbit needs. 🙂

Need help with your RabbitMQ?

Our world-leading RabbitMQ team offers a variety of options to meet your needs. We have everything from health checks to support and monitoring, to help you ensure an efficient and reliable RabbitMQ system.

Or, if you would like complete visibility of your RabbitMQ system from an easy-to-read dashboard, why not take advantage of our free WombatOAM trial?

The post RabbitMQ Quorum Queues explained: what you need to know appeared first on Erlang Solutions.

by Erlang Admin at March 07, 2023 11:18

March 05, 2023

Ignite Realtime Blog

HTTP File Upload v1.2.2 released!

We’ve just released version 1.2.2 of the HTTP File Upload plugin for Openfire. This release includes Ukrainian language support, thanks to Yurii Savchuk (svais) and his son Vladislav Savchuk (Bruhmozavr), as well as a few updated translations for Portuguese, Russian and English.

Grab it from the plugins page in your Openfire Admin Console, or download manually from the HTTP File Upload archive page, here.

For other release announcements and news follow us on Twitter and Mastodon.

1 post - 1 participant

Read full topic

by danc at March 05, 2023 19:04

The XMPP Standards Foundation

The XMPP Newsletter February 2023

Welcome to the XMPP Newsletter, great to have you here again! This issue covers the month of February 2023. Many thanks to all our readers and all contributors!

Like this newsletter, many projects and their efforts in the XMPP community are a result of people’s voluntary work. If you are happy with the services and software you may be using, please consider saying thanks or help these projects! Interested in supporting the Newsletter team? Read more at the bottom.

XSF Announcements

xmpp.org got a new software section! Looking for XMPP software, i.e. clients, servers, libraries, components, and tools? Check out xmpp.org’s new software section, which lets you filter software by your own criteria. Looking for a client which works on Android and supports audio/video calls? Looking for a library that supports XEP-0461: Message Replies? Just apply the filter and see what you get!

xmpp.org’s new software section

XMPP and Google Summer of Code 2023

The XSF has been accepted again as a hosting organisation for GSoC 2023!

XSF and Google Summer of Code 2023

XSF fiscal hosting projects

The XSF offers fiscal hosting for XMPP projects. Please apply via Open Collective. For more information, see the announcement blog post. Current projects:

XMPP Events

XMPP Videos

  • Cheogram offers a new PeerTube channel for videos about new features in their XMPP client and gateways. Also available on YouTube.

Articles

Software news

Clients and applications

  • Converse 10.1.1 and 10.1.2 have been released, which both fix some bugs. Converse is a web based XMPP/Jabber chat client.
  • Dino 0.4.0 ‘Ilulissat’ and 0.4.1 have been released. The 0.4 release adds support for message reactions and replies. Dino also switched from GTK3 to GTK4 and makes use of libadwaita now.

Dino 0.4 now supports Message Replies and Message Reactions

  • Gajim 1.7.0 and 1.7.1 have been released. These releases bring improved KeepassXC integration, better defaults for group chats created with ejabberd, and some important bug fixes.
  • Psi+ 1.5.1645 and 1.5.1646 have been released.

Servers

Libraries & Tools

  • nbxmpp 4.2.0 has been released, which brings support for XEP-0461 Message Replies and a profile for SASLprep.

Extensions and specifications

The XMPP Standards Foundation develops extensions to XMPP in its XEP series in addition to XMPP RFCs.

Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).

Proposed

The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

  • No XEPs proposed this month.

New

  • No new XEPs this month.

Deferred

If an experimental XEP is not updated for more than twelve months, it will be moved off Experimental to Deferred. If there is another update, it will put the XEP back onto Experimental.

  • No XEPs deferred this month.

Updated

  • No XEPs updated this month.

Last Call

Last calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call helps improve the XEP before it is returned to the Council for advancement to Stable.

  • No Last Call this month.

Stable

  • No XEP moved to stable this month.

Deprecated

  • No XEP deprecated this month.

Call for Experience

A Call For Experience, like a Last Call, is an explicit call for comments, but in this case it is mostly directed at people who have implemented, and ideally deployed, the specification. The Council then votes to move it to Final.

  • No Call for Experience this month.

Spread the news!

Please share the news on other networks:

Subscribe to the monthly XMPP newsletter
Subscribe

Also check out our RSS Feed!

Looking for job offers or want to hire a professional consultant for your XMPP project? Visit our XMPP job board.

Newsletter Contributors & Translations

This is a community effort, and we would like to thank translators for their contributions. Volunteers are welcome! Translations of the XMPP Newsletter will be released here (with some delay):

  • English (original): xmpp.org
    • General contributors: Adrien Bourmault (neox), Alexander “PapaTutuWawa”, emus, Licaon_Kter, Ludovic Bocquet, MattJ, MSavoritias (fae,ve), wurstsalat, Zash
  • French: jabberfr.org and linuxfr.org
    • Translators: Adrien Bourmault (neox), alkino, anubis, Benoît Sibaud, Pierre Jarillon, Ppjet6, Ysabeau
  • German: xmpp.org and anoxinon.de
    • Translators: Jeybe, wh0nix
  • Italian: notes.nicfab.eu
    • Translators: nicfab
  • Spanish: xmpp.org
    • Translators: daimonduff, TheCoffeMaker

Help us to build the newsletter

This XMPP Newsletter is produced collaboratively by the XMPP community. Each month’s newsletter issue is drafted in this simple pad. At the end of each month, the pad’s content is merged into the XSF Github repository. We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. You have a project and want to spread the news? Please consider sharing your news or events here, and promote it to a large audience.

Tasks we do on a regular basis:

  • gathering news in the XMPP universe
  • short summaries of news and events
  • summary of the monthly communication on extensions (XEPs)
  • review of the newsletter draft
  • preparation of media images
  • translations
  • communication via media accounts

License

This newsletter is published under CC BY-SA license.

March 05, 2023 00:00

March 02, 2023

Ignite Realtime Blog

Translations everywhere!

Two months ago, we started using Transifex as a platform that can be easily used by anyone to provide translations for our projects, like Openfire and Spark.

It is great to see that new translations are pouring in! In the last few months, more than 20,000 translated words have been provided by our community!

We’ve enabled the Transifex platform for most of the Openfire plugins (that require translations) today. If you are proficient in a non-English language, please join the translation effort!

1 post - 1 participant

Read full topic

by guus at March 02, 2023 13:46

Erlang Solutions

Getting started with RabbitMQ: A beginner’s guide for your business

RabbitMQ is one of the world’s most popular open-source message brokers. With tens of thousands of users (and growing), its lightweight and easy-to-deploy nature has made it a success with small startups and large enterprises across the globe.

But how do you know if it’s best for your business?  

Read on and get the rundown on the reliable messaging software that delivers every time.

So, what exactly is RabbitMQ?

RabbitMQ is an open-source message broker software that implements the Advanced Message Queuing Protocol (AMQP). It is used to facilitate communication between applications or microservices, by allowing them to send and receive messages in a reliable and scalable way.

Simply put, RabbitMQ acts as a mediator between applications that need to exchange messages. It acts as a message queue, where producers can send messages, and then consumers can receive and process them. It ensures that messages are delivered in order, without loss, and provides features such as routing, failover, and message persistence.

RabbitMQ is a highly powerful tool for building complex, scalable, and reliable communication systems between applications.

What is a Message Broker? 

A message broker is an intermediary component that sits between applications and helps them communicate with each other. 

Basic set-up of a message queue: CloudAMQP

In short, applications send messages to the broker. The broker then sends the message to the intended receiver. This separates sending and receiving applications, allowing them to scale independently. 

The message broker also acts as a buffer between sending and receiving applications. It ensures that messages are delivered in the most timely and efficient manner possible. 

In RabbitMQ, messages are stored in queues, and applications can publish messages to those queues and consume messages from them. It supports multiple messaging models including point-to-point, publish/subscribe, and request/reply, making it a flexible solution for many use cases. 

By using RabbitMQ as a message broker, developers can decouple the components of their system, allowing them to build more resilient and scalable applications. 
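
To make that concrete, here is a minimal, illustrative sketch using the Elixir AMQP client; the queue name and connection details are ours for illustration, not from this post.

# Illustrative only: declare a queue, publish one message, then fetch it.
{:ok, conn} = AMQP.Connection.open()   # connects to a local RabbitMQ by default
{:ok, chan} = AMQP.Channel.open(conn)

{:ok, _} = AMQP.Queue.declare(chan, "orders", durable: true)
:ok = AMQP.Basic.publish(chan, "", "orders", "hello from the producer")

{:ok, payload, _meta} = AMQP.Basic.get(chan, "orders", no_ack: true)
IO.puts("consumer received: #{payload}")

:ok = AMQP.Connection.close(conn)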

So why should I choose RabbitMQ? 

We’ve already touched on this slightly but, there are several reasons why RabbitMQ is a popular choice for implementing message-based systems for your business: 

It’s scalable: RabbitMQ can handle large amounts of messages and can be easily scaled up. 

It’s flexible: RabbitMQ supports multiple messaging models, including point-to-point, publish/subscribe and request/reply. 

It’s reliable: RabbitMQ provides many features to ensure reliable message delivery, including message confirmation, message persistence, and auto-recovery. 

It’s interoperable: RabbitMQ implements the AMQP standard, making it interoperable with multiple platforms and languages. 

To learn more about RabbitMQ’s impressive problem-solving capabilities, you can delve into our technical deep dive detailing its delivery.

What are the benefits of using RabbitMQ for my business?

RabbitMQ’s popularity is down to its range of benefits, including:

Decoupled architecture: RabbitMQ allows applications to communicate with each other through a centralised message queue, decoupling sending and receiving applications. This allows for a flexible and extensible architecture, in which components can scale independently. 

Performance improvement: RabbitMQ can handle large volumes of messages. It also has low latency, which improves overall system performance. 

Reliable messaging: RabbitMQ provides many features to ensure reliable messaging, including message confirmation, message retention, and auto-recovery. 

Flexible Messaging Model: RabbitMQ supports a variety of messaging models, including point-to-point, publish/subscribe, and request/reply, enabling a flexible and adaptable messaging system response. 

Interoperability: RabbitMQ implements the AMQP standard, making it interoperable with multiple platforms and languages. 

But don’t just take our word for it. 

Erlang Solutions’ world-leading RabbitMQ experts have been trusted with implementing RabbitMQ for some of the world’s biggest brands. 

You can read more about their experience and the success of RabbitMQ in their business.

When should I start to consider using RabbitMQ?

Wondering when the right time is to start implementing RabbitMQ as your messaging system? If you’re ready for reliable, scalable, and flexible communication between your applications, it might be time to consider it. 

Here are some common use cases for RabbitMQ: 

Decoupled Architecture: RabbitMQ allows you to build a decoupled architecture, in which different components of your system can communicate with each other without the need for tight coupling. This makes your system more flexible, extensible and resilient. 

Asynchronous communication: When you need to implement asynchronous communication between applications, RabbitMQ can help. For example, do you have a system that needs to process large amounts of data? RabbitMQ can be used to offload that processing to a separate component, allowing the parent component to continue serving requests while the data is processed in the background. 

Microservices: RabbitMQ is well-suited to a microservices architecture, where different components of your system are implemented as separate services. It provides a communication infrastructure, allowing these services to communicate with each other.

Integrating with legacy systems: Do you have legacy systems that need to communicate with each other? RabbitMQ can provide a common messaging infrastructure that allows those systems to exchange messages. 

High Availability and Reliability: RabbitMQ provides features such as message persistence, automatic failover, and replication, making it a reliable solution for mission-critical applications. 

Multi-Protocol Support: RabbitMQ supports multiple messaging protocols, including AMQP, MQTT, and STOMP, making it a flexible solution for different types of applications. 

Ultimately, the choice is yours to use RabbitMQ or any other messaging system, as it all comes down to your specific business needs.

I would like to get started with RabbitMQ!

Whether you are building a small application or a large-scale system, RabbitMQ is a great solution to enable inter-component communication. 

We appreciate that you might have further questions, and our team of expert consultants are on hand and ready to talk you through the process. Just head to our contact page.

The post Getting started with RabbitMQ: A beginner’s guide for your business appeared first on Erlang Solutions.

by Cara May-Cole at March 02, 2023 10:28

March 01, 2023

JMP

Cheogram Android: Stickers

One feature people ask about from time to time is stickers.  Now, “stickers” isn’t really a feature, nor is it even universally agreed what it means, but we’ve been working on some improvements to Cheogram Android (and the Cheogram service) to make some sticker workflows better, released today in 2.12.1-3.  This post will mostly talk about those changes and the technical implications; if you just want to see a demo of some UI you may want to skip to the video demo.

Many Android users already have pretty good support for inserting stickers (or GIFs) into Cheogram Android via their keyboard.  However, as the app existed at the time, this would result in the sender re-uploading and the recipient re-downloading the sticker image every time, and fill up the sending server and receiving device with many copies of the same image.  The first step to mitigating this was to switch local media storage in the app to content-addressed, which in this case means that the file is named after the hash of its contents.  This prevents filling up the device when receiving the same image many times.
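
As a rough illustration of the idea (Cheogram Android itself is not written in Elixir, and this is not its code), content-addressed storage derives a file’s name from a hash of its bytes, so receiving the same image twice maps to the same path:

# Illustrative sketch only; the directory layout and hash choice are assumptions.
defp content_address(media_dir, bytes) do
  digest = :crypto.hash(:sha256, bytes) |> Base.encode16(case: :lower)
  Path.join(media_dir, digest)
end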

Now that we know the hashes of our stored media, we can use SIMS to transmit this hash when sending.  If the app sees an image that it already has, it can display it without downloading at all, saving not only space but bandwidth and time as well.  The Cheogram service also uses SIMS to transmit hashes of incoming MMS images for this purpose as well.

An existing Jabber client which uses the word “stickers” is Movim.  It wouldn’t make sense to add the word to our UI without supporting what they already have.  So we added support for XHTML-IM including Bits of Binary images.  This also relies on hash-based storage or caching, which by now we had.  This tech will also be useful in the future to extend beyond stickers into custom emoji.

Some stickers are animated, and users want to be able to send GIFs as well, so the app was updated to support inline playback of animated images (both GIF and WebP format).

Some users don’t have any sticker support in their keyboard or OS, so we want to provide some tools for these users as well.  We have added the option to download some default sticker packs (mostly curated from the default set from Movim for now) so that users start with some options.  We also built a small proxy to allow easily importing stickers intended for Signal by clicking the regular “add to Signal” links on e.g. signalstickers.com.  Any sticker selected from these will get sent without even uploading, saving time and space on the server, and then will be received by any user of the app who has the default packs installed with no need for downloading, with fallbacks for other clients and situations of course.

If a user receives a sticker that they’d like to save for easily sending out again later, they can long-press any image they receive and choose “Save as sticker” which will prompt them to choose or create a sticker pack to keep it in, then save it there.  Pointing a sticker sheet app or keyboard at this directory also allows re-using other sticker selection UIs with custom stickers saved in this way.

Taken together we hope these features produce real benefits for users of stickers, both with and without existing keyboard support, and also provide foundational work that we can build upon to provide custom emoji, thumbnails before downloading, URL previews, and other rich media features in the future.  If you’d like to see some of these features in action, check out this short video.

by Stephen Paul Weber at March 01, 2023 17:55

Debian XMPP Team

XMPP What's new in Debian 12 bookworm

On Tuesday 13 July 2021 there was a blog post about new XMPP-related software releases which had been uploaded to Debian 11 (bullseye). Today, we will inform you about updates for the upcoming Debian release, bookworm.

A lot of new releases have been provided by the upstream projects. There were a lot of changes to XMPP clients like Dino, Gajim, Profanity, Poezio and others. The XMPP servers have been enhanced as well.

Unfortunately, we cannot provide a list of all the changes which have been made, but we will try to highlight some of the changes and new features.

BTW, feel free to join the Debian User Support on Jabber at xmpp:debian@conference.debian.org?join.

You can find a list of 58 packages of the Debian XMPP team on the XMPP QA Page.

  • Dino, the modern XMPP client, has been upgraded from 0.2.0 to 0.4.0. The new version supports encrypted calls and group calls, and reactions give you a way to respond to a message with an emoji. You can find more information about Dino 0.3.0 and Dino 0.4.0 in the release notes of the upstream project. Dino is using GTK4 / libadwaita, which provides widgets for mobile-friendly UIs. Changes have also been made to the main view of Dino.
  • Gajim, a GTK+-based Jabber client, has been upgraded from 1.3.1 to 1.7.1. Since 1.4, Gajim has a new UI which supports spaces. 1.5.2 adds a content viewer for PEP nodes. 1.6.0 uses libsoup3 and Python 3.10. Audio preview looks a lot nicer with a wave graph visualisation, and profile images (avatars) are no longer limited to JPG. The plugins gajim-appindicatorintegration, gajim-plugininstaller, gajim-syntaxhighlight and gajim-urlimagepreview are obsolete; these features have been moved into Gajim. There have been a lot of releases in Gajim. You can find the full story at https://gajim.org/post/
  • Profanity, the console-based XMPP client, has been upgraded from 0.10.0 to 0.13.1. Profanity supports XEP-0377 Spam Reporting and XEP-0157 server contact information discovery. It now marks a window with an attention flag, has updated HTTP Upload (XEP-0363) support, and messages can be composed with an external editor. It also features easy quoting, in-band account registration (XEP-0077), printing an OMEMO verification QR code, and many more.
  • Kaidan, a simple and user-friendly Jabber/XMPP client based on Qt has been updated from 0.7.0 to 0.8.0. The new release supports XEP-0085: Chat State Notifications and XEP-0313: Message Archive Management.
  • Poezio, a console-based XMPP client, has been updated from 0.13.1 to 0.14. Poezio is now under GPLv3+. The new release supports requests for voice, and the /join command supports using an XMPP URI. More information at https://lab.louiz.org/poezio/poezio/-/raw/v0.14/CHANGELOG.
  • Swift, back in Debian, is the Swift XMPP client - a cross-platform client written in C++. In 2015 the client was removed from testing and is back with version 5.0.

Server

  • prosody, the lightweight extensible XMPP server, has been upgraded from 0.11.9 to 0.12.2. It brings mobile and connectivity optimizations, a new module for HTTP file sharing, and audio/video calling support. See the release announcement for more info. You will also find a lot of new modules which have been added in 0.12.0. Version 0.12.3 is awaiting migration from unstable to testing.
  • ejabberd, extensible realtime platform (XMPP server + MQTT broker + SIP service) has been updated from Version 21.01 to 23.01. The new version supports the latest version of MIX (XEP-0369). There were also changes for SQL and MUC. See the release information for 22.10 and 23.01 for more details.

Libs

  • libstrophe, xmpp C lib has been upgraded from 0.10.1 to 0.12.2. The lib has SASL EXTERNAL support (XEP-0178), support for manual certificate verification and Stream Management support (XEP-0198).
  • python-nbxmpp 2.0.2 to 4.2.0 - used by gajim
  • qxmpp 1.3.2 to 1.4.0
  • slixmpp 1.7.0 to 1.8.3 (see https://lab.louiz.org/poezio/slixmpp/-/tags/slix-1.8.0)
  • loudmouth 1.5.3 to 1.5.4
  • libomemo-c, new in Debian with version 0.5.0 - a fork of libsignal-protocol-c

Others

  • There were some changes to the Libervia packages in Debian, formerly known as Salut à Toi (SaT). The most visible change is that Salut à Toi has been renamed to libervia:
  • salutatoi is now libervia-backend (0.9.0)
  • sat-xmpp-primitivus is now libervia-tui
  • sat-xmpp-core is now libervia-backend
  • sat-xmpp-jp is now libervia-cli
  • sat-pubsub is now libervia-pubsub (0.4.0)
  • gsasl has been updated from 1.10.0 to 2.2.0
  • libxeddsa 2.0.0 is new in Debian - toolkit around Curve25519 and Ed25519 key pairs

Happy chatting - keep in touch with your family and friends via Jabber / XMPP - XMPP is an open standard of the Internet Engineering Task Force (IETF) for instant messaging.

by Debian XMPP Team at March 01, 2023 00:00

February 27, 2023

The XMPP Standards Foundation

XMPP at Google Summer of Code 2023

XSF and Google Summer of Code 2023

The XSF has been accepted again as a hosting organisation for Google Summer of Code 2023!

Both students and open-source newcomers are now invited to consider participating and to prepare for the application phase. We would like to invite you to review the XMPP projects that have signed up with their ideas for this year.

XMPP Projects at Google Summer of Code 2023

Projects which signed up are:

  • Dino: Ideas by the Dino team
  • Monal: Ideas by the Monal team
  • Moxxy: Ideas by the Moxxy team
  • If you are capable and have the necessary skills, you can also propose your own topic. Please bear in mind that this requires extended effort.

Designated Web Page

We have further details and advertisement material on our designated web page presented in various languages.

Check out our media channels!

Looking forward

–The XSF Organisation Admin

February 27, 2023 00:00

February 24, 2023

Ignite Realtime Blog

inVerse Openfire plugin 10.1.2-1 released!

Earlier today, version 10.1.2 release 1 of the Openfire inVerse plugin was released. This plugin allows you to easily deploy the third-party Converse client in Openfire. In this release, the version of the client that is bundled in the plugin is updated to 10.1.2!

The updated plugin should become available for download in your Openfire admin console in the course of the next few hours. Alternatively, you can download the plugin directly, from the plugin’s archive page.

For other release announcements and news follow us on Twitter

1 post - 1 participant

Read full topic

by guus at February 24, 2023 21:11

February 23, 2023

Ignite Realtime Blog

New: Openfire MUC Real-Time Block List plugin!

A new plugin has been made available for Openfire, our cross-platform real-time collaboration server based on the XMPP protocol. We have named this new plugin the MUC Real-Time Block List plugin.

This plugin can help you moderate your chat rooms, especially when your service is part of a larger network of federated XMPP domains. From experience, the XMPP community has learned that bad actors tend to spam a wide range of public chat rooms on an equally wide range of different domains. Prior to the functionality provided by this plugin, the administrator of each MUC service had to manually adjust permissions, to keep unwanted entities out. With this new plugin, that process is automated.

This plugin can be used to subscribe to a Publish/Subscribe node (as defined in XEP-0060), which can live on a remote XMPP domain but is curated by a trusted (group of) administrators. It is expected that this node contains a list of banned entities. When Openfire, through the plugin, is notified that the list has received a new banned entity, it will prevent that entity from joining a chat room in Openfire (if they’re already in, they will be kicked out automatically). Using this mechanism, moderation efforts centralised in one federated Pub/Sub service can be used by any server that uses this plugin.

This plugin is heavily inspired, and aspires to be compatible with, Prosody’s mod_muc_rtbl and the pub/sub services that it uses.

The first version of this plugin is now available on our website and should become available in the list of installable plugins in your instance of Openfire in the next few hours. Please give it a test! We are interested in hearing back from you!

For other release announcements and news follow us on Twitter

1 post - 1 participant

Read full topic

by guus at February 23, 2023 20:30

Erlang Solutions

Can’t Live `with` It, Can’t Live `with`out It

I’d like to share some thoughts about Elixir’s with keyword.  with is a wonderful tool, but in my experience it is a bit overused.  To use it best, we must understand how it behaves in all cases.  So, let’s briefly cover the basics, starting with pipes in Elixir.

Pipes are a wonderful abstraction

But like all tools, you should think about when it is best used…

Pipes are at their best when you expect your functions to accept and return basic values. But often we don’t have only simple values because we need to deal with error cases. For example:

region 
|> Module.fetch_companies() 
|> Module.fetch_departments() 
|> Enum.map(& &1.employee_count) 
|> calculate_average()

If our fetch_* methods return list values there isn’t a problem. But often we fetch data from an external source, which means we introduce the possibility of an error. Generally in Elixir this means {:ok, _} tuples for success and {:error, _} tuples for failure. Using pipes that might become:

region
|> Module.fetch_companies()
|> case do
  {:ok, companies} -> Module.fetch_departments(companies)
  {:error, _} = error -> error
end
|> case do
  {:ok, departments} ->
    departments
    |> Enum.map(& &1.employee_count)
    |> calculate_average()
  {:error, _} = error -> error
end

Not horrible, but certainly not beautiful. Fortunately, Elixir has with!

`with` is a wonderful abstraction

But like all tools, you should think about when it’s best used…

with is at its best when dealing with the happy paths of a set of calls which all return similar things. What do I mean by that? Let’s look at what this code might look like using with.

with {:ok, companies} <- Module.fetch_companies(region),
     {:ok, departments} <- Module.fetch_departments(companies) do
  departments
  |> Enum.map(& &1.employee_count)
  |> calculate_average()
end

That’s definitely better!

  • We separated out the parts of our code which might fail (remember that failure is a sign of a side-effect and in functional programming we want to isolate side-effects).
  • The body is only the things that we don’t expect to fail.
  • We don’t need to explicitly deal with the {:error, _} cases (in this case with will return any clause values which don’t match the pattern before <-).

But this is a great example of a happy path where the set of calls all return similar things. So what are some examples of where we might go wrong with with?

Non-standard failure

What if Module.fetch_companies returns {:error, _} but `Module.fetch_departments` returns just :error? That means your with is going to return two different error results. If your with is the last expression in your function, then that complexity is now the caller’s responsibility. You might not think that’s a big deal because we can do this:

else
  :error -> {:error, "Error fetching departments"}

But this breaks to more-or-less important degrees because:

  • … once you add an else clause, you need to take care of every non-happy path case (e.g. above we should match the {:error, _} returned by Module.fetch_companies which we didn’t need to explicitly match before) 😤
  • … if either function is later refactored to return another pattern (e.g. {:error, _, _}) – there will be a WithClauseError exception (again, because once you add an else the fallback behavior of non-matching <- patterns doesn’t work) 🤷‍♂️
  • … if Module.fetch_departments is later refactored to return {:error, _} – we’ll then have an unused handler 🤷‍♂️
  • … if another clause is added which also returns :error the message Error fetching departments probably won’t be the right error 🙈
  • … if you want to refactor this code later, you need to understand *everything* that the called functions might potentially return, leading to code which is hard to refactor.  If there are just two clauses and we’re just calling simple functions, that’s not as big of a deal.  But with many with clauses which call complex functions, it can become a nightmare 🙀

So the first major thing to know when using with is what happens when a clause doesn’t match its pattern:

  • If else is not specified, then the value of the non-matching clause is returned.
  • If else is specified, then the code for the first matching else pattern is evaluated. If no else pattern matches, a WithClauseError is raised.

As Stratus3D excellently put it: “with blocks are the only Elixir construct that implicitly uses the same else clauses to handle return values from different expressions. The lack of a one-to-one correspondence between an expression in the head of the with block and the clauses that handle its return values makes it impossible to know when each else clause will be used”. There are a couple of well known solutions to address this.  One is using “tagged tuples”:

with {:fetch_companies, {:ok, companies}} <- {:fetch_companies, Module.fetch_companies(region)},
     {:fetch_departments, {:ok, departments}} <- {:fetch_departments, Module.fetch_departments(companies)} do
  departments
  |> Enum.map(& &1.employee_count)
  |> calculate_average()
else
  {:fetch_companies, {:error, reason}} -> ...
  {:fetch_departments, :error} -> ...
end

Though tagged tuples should be avoided for various reasons:

  • They make the code a lot more verbose
  • else is now being used, so we need to match all patterns that might occur
  • We need to keep the clauses and else in sync when adding/removing/modifying clauses, leaving room for bugs.
  • Most importantly: the value in an abstraction like {:ok, _} / {:error, _} tuples is that you can handle things generically without needing to worry about the source

A generally better solution is to create functions which normalize the values matched in the patterns.  This is covered well in a note in the docs for with and I recommend checking it out.  One addition I would make: in the above case you could leave the Module.fetch_companies alone and just surround the Module.fetch_departments with a local fetch_departments to turn the :error into an {:error, reason}.
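
For example, a small local wrapper along these lines (a sketch; the error reason is our own choice) keeps the with clauses uniform without touching Module.fetch_companies:

defp fetch_departments(companies) do
  case Module.fetch_departments(companies) do
    {:ok, departments} -> {:ok, departments}
    # Normalise the bare :error into the {:error, reason} shape used by the other clauses
    :error -> {:error, :failed_to_fetch_departments}
  end
end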

Non-standard success

We can even get unexpected results when with succeeds! To start, let’s look at the parse/1 function from the excellent decimal library. Its typespec tells us that it can return {Decimal.t(), binary()} or :error. If we want to match a decimal value without extra characters, we could have a with clause like this:

with {:ok, value} <- fetch_value(),
     {decimal, ""} <- Decimal.parse(value) do
  {:ok, decimal}
end

But if value is given as "1.23 " (with a space at the end), then Decimal.parse/1 will return {#Decimal<1.23>, " "}. Since that doesn’t match our pattern (string with a space vs. an empty string), the body of the with will be skipped. If we don’t have an else then instead of returning a {:ok, _} value, we return {#Decimal<1.23>, " "}.

The solution may seem simple: match on {decimal, _}! But then we match strings like “1.23a” which is what we were trying to avoid. Again, we’re likely better off defining a local parse_decimal function which returns {:ok, _} or {:error, _}.
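
A sketch of such a helper (the error reason is our own choice):

defp parse_decimal(value) do
  case Decimal.parse(value) do
    # Only succeed when the entire string was consumed
    {decimal, ""} -> {:ok, decimal}
    _ -> {:error, :invalid_decimal}
  end
end

The with clause then becomes {:ok, decimal} <- parse_decimal(value) and falls through like the others.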

There are other, similar, situations:

  • {:ok, %{"key" => value}} <- fetch_data(...) – the value inside of the {:ok, _} tuple may not have a "key" key.
  • [%{id: value}] <- fetch_data(...) – the list returned may have more or less than one item, or if it does only have one item it may not have the :id key
  • value when length(value) > 2 <- fetch_data(...) – the when might not match. There are two cases where this might surprise you:
    • If value is a list, the length of the list being 2 or below will return the list.
    • If value is a string, length isn’t a valid function (you’d probably want byte_size). Instead of an exception, the guard simply fails and the pattern doesn’t match.

The problem in all of these cases is that the intermediate value from fetch_data will be returned, not what the body of the with would return. This means that our with returns “uneven” results. We can handle these cases in the else, but again, once we introduce else we need to take care of all potential cases.

I might even go to the extent of recommending that you don’t define with clause patterns which are at all deep in their pattern matching unless you are very sure the success case will be able to match the whole pattern.  One example where you might take a risk is when matching %MyStruct{key: value} <- … where you know that a MyStruct value is going to be returned and you know that key is one of the keys defined for the struct. No matter the case, dialyzer is one tool to gain confidence that you will be able to match on the pattern (at least for your own code or libraries which also use dialyzer).

One of the simplest and most standard ways to avoid these issues is to make sure the functions that you are calling return {:ok, variable} or {:error, reason} tuples. Then with can fall through cleanly (definitely check out Chris Keathley’s discussion of “Avoid else in with blocks” in his post “Good and Bad Elixir”).

With all that said, I recommend using with statements whenever you can! Just make sure that you think about fallback cases that might happen. Even better: write tests to cover all of your potential cases! If you can strike a balance and use with carefully, your code can be both cleaner and more reliable.

Need help with Elixir?

We’ve helped 100’s of the world’s biggest companies achieve success with Elixir. From digital transformation, developing fit-for-purposes software for your business logic, to proof-of-concepts, right through to staff augmentation development and support. We’re here to make sure your system makes the most of Elixir to be scalable, reliable and easy to maintain. Talk to us to learn more.

Training

Want to improve your Elixir skills? Our world-leading experts are here to help. Learn from the same team who architect, manage and develop some of the biggest in-production systems available. Head to our training page to learn more about our courses and tutorials.

The post Can’t Live `with` It, Can’t Live `with`out It appeared first on Erlang Solutions.

by Brian Underwood at February 23, 2023 12:29

February 22, 2023

Profanity

New Profanity Old System

Occasionally people visit our MUC asking how to run the latest profanity release on years old systems. For some distributions people maintain a backports project, so you can get it from there if available.

Here we want to describe another method: using containers, more specifically distrobox.

What’s Distrobox?

It’s basically a tool that lets you run another distribution on your system. It uses docker/podman to create containers that are well integrated into your host system. This means all your Profanity config files etc. will be in the usual place in ~/.config/profanity and ~/.local/share/profanity.

Be aware: Profanity’s configuration files might change with new versions. Usually we transform old config files into the new format. If, however, you use distrobox to run the latest Profanity and then want to go back to your old version, it might be that your old Profanity doesn’t understand the new or changed config options.

Setup

You need to have docker/podman installed, and the daemon should be running. Install distrobox, preferably via your distribution’s package manager. Alternatively, you can use the infamous one-liner curl -s https://raw.githubusercontent.com/89luca89/distrobox/main/install | sudo sh.

In this guide we will use an openSUSE Tumbleweed container, since it’s a rolling release distribution that will always have the latest Profanity available.

host$ distrobox-create --name profanity-on-tw --image opensuse/tumbleweed
Using default tag: latest
latest: Pulling from opensuse/tumbleweed
f7cda0ba8b2c: Pull complete
Digest: sha256:8d4c43253942e84737681ee8307c79be4ca9ec9011b6616d40b2ef204143ab88
Status: Downloaded newer image for opensuse/tumbleweed:latest
docker.io/opensuse/tumbleweed:latest
Creating 'profanity-on-tw' using image opensuse/tumbleweed	 [ OK ]
Distrobox 'profanity-on-tw' successfully created.
To enter, run:

distrobox enter profanity-on-tw

profanity-on-tw

We can now enter this container and install profanity in it via:

host$ distrobox-enter profanity-on-tw
Container profanity-on-tw is not running.
Starting container profanity-on-tw
run this command to follow along:

 docker logs -f profanity-on-tw

 Starting container...                  	 [ OK ]
 Installing basic packages...           	 [ OK ]
 Setting up read-only mounts...         	 [ OK ]
 Setting up read-write mounts...        	 [ OK ]
 Setting up host's sockets integration...	 [ OK ]
 Integrating host's themes, icons, fonts...	 [ OK ]
 Setting up package manager exceptions...	 [ OK ]
 Setting up rpm exceptions...           	 [ OK ]
 Setting up sudo...                     	 [ OK ]
 Setting up groups...                   	 [ OK ]
 Setting up users...                    	 [ OK ]
 Executing init hooks...                	 [ OK ]

Container Setup Complete!
profanity-on-tw$ sudo zypper in profanity
Loading repository data...
Reading installed packages...
Resolving package dependencies...

The following 4 recommended packages were automatically selected:
  python310 python310-curses python310-dbm python310-pip

The following 16 NEW packages are going to be installed:
  libgdbm6 libgdbm_compat4 libmpdec3 libotr5 libpython3_10-1_0 libsignal-protocol-c2 libstrophe0 profanity
  profanity-mini python310 python310-base python310-curses python310-dbm python310-pip python310-setuptools
  shared-python-startup

16 new packages to install.
Overall download size: 15.7 MiB. Already cached: 0 B. After the operation, additional 60.1 MiB will be used.
Continue? [y/n/v/...? shows all options] (y): y
Retrieving: libgdbm6-1.23-1.10.x86_64 (openSUSE-Tumbleweed-Oss)                                  (1/16),  53.0 KiB    
Retrieving: libgdbm6-1.23-1.10.x86_64.rpm ........................................................[done (303.6 KiB/s)]
Retrieving: libmpdec3-2.5.1-2.12.x86_64 (openSUSE-Tumbleweed-Oss)                                (2/16),  82.8 KiB    
Retrieving: libmpdec3-2.5.1-2.12.x86_64.rpm ......................................................[done (466.2 KiB/s)]
Retrieving: libotr5-4.1.1-4.1.x86_64 (openSUSE-Tumbleweed-Oss)                                   (3/16),  60.7 KiB    
Retrieving: libotr5-4.1.1-4.1.x86_64.rpm .........................................................[done (624.3 KiB/s)]
Retrieving: libsignal-protocol-c2-2.3.3-1.15.x86_64 (openSUSE-Tumbleweed-Oss)                    (4/16), 163.7 KiB    
Retrieving: libsignal-protocol-c2-2.3.3-1.15.x86_64.rpm ..........................................[done (773.8 KiB/s)]
Retrieving: libstrophe0-0.12.2-1.3.x86_64 (openSUSE-Tumbleweed-Oss)                              (5/16),  89.5 KiB    
Retrieving: libstrophe0-0.12.2-1.3.x86_64.rpm ....................................................[done (548.6 KiB/s)]
Retrieving: shared-python-startup-0.1-6.9.noarch (openSUSE-Tumbleweed-Oss)                       (6/16),  12.9 KiB    
Retrieving: shared-python-startup-0.1-6.9.noarch.rpm ...........................................................[done]
Retrieving: libgdbm_compat4-1.23-1.10.x86_64 (openSUSE-Tumbleweed-Oss)                           (7/16),  27.7 KiB    
Retrieving: libgdbm_compat4-1.23-1.10.x86_64.rpm ...................................................[done (1.2 KiB/s)]
Retrieving: libpython3_10-1_0-3.10.9-2.2.x86_64 (openSUSE-Tumbleweed-Oss)                        (8/16),   1.3 MiB    
Retrieving: libpython3_10-1_0-3.10.9-2.2.x86_64.rpm ..............................................[done (882.1 KiB/s)]
Retrieving: python310-base-3.10.9-2.2.x86_64 (openSUSE-Tumbleweed-Oss)                           (9/16),   9.1 MiB    
Retrieving: python310-base-3.10.9-2.2.x86_64.rpm .................................................[done (975.5 KiB/s)]
Retrieving: python310-setuptools-65.6.3-1.2.noarch (openSUSE-Tumbleweed-Oss)                    (10/16),   1.3 MiB    
Retrieving: python310-setuptools-65.6.3-1.2.noarch.rpm ...........................................[done (913.8 KiB/s)]
Retrieving: python310-pip-22.3.1-1.2.noarch (openSUSE-Tumbleweed-Oss)                           (11/16),   2.5 MiB    
Retrieving: python310-pip-22.3.1-1.2.noarch.rpm ..................................................[done (958.3 KiB/s)]
Retrieving: python310-3.10.9-2.2.x86_64 (openSUSE-Tumbleweed-Oss)                               (12/16), 168.5 KiB    
Retrieving: python310-3.10.9-2.2.x86_64.rpm ......................................................[done (738.3 KiB/s)]
Retrieving: python310-dbm-3.10.9-2.2.x86_64 (openSUSE-Tumbleweed-Oss)                           (13/16), 141.2 KiB    
Retrieving: python310-dbm-3.10.9-2.2.x86_64.rpm ..................................................[done (806.6 KiB/s)]
Retrieving: python310-curses-3.10.9-2.2.x86_64 (openSUSE-Tumbleweed-Oss)                        (14/16), 171.6 KiB    
Retrieving: python310-curses-3.10.9-2.2.x86_64.rpm ...............................................[done (910.2 KiB/s)]
Retrieving: profanity-0.13.1-1.2.x86_64 (openSUSE-Tumbleweed-Oss)                               (15/16), 104.8 KiB    
Retrieving: profanity-0.13.1-1.2.x86_64.rpm ......................................................[done (744.9 KiB/s)]
Retrieving: profanity-mini-0.13.1-1.2.x86_64 (openSUSE-Tumbleweed-Oss)                          (16/16), 446.4 KiB    
Retrieving: profanity-mini-0.13.1-1.2.x86_64.rpm .................................................[done (934.5 KiB/s)]

Checking for file conflicts: ...................................................................................[done]
( 1/16) Installing: libgdbm6-1.23-1.10.x86_64 ..................................................................[done]
( 2/16) Installing: libmpdec3-2.5.1-2.12.x86_64 ................................................................[done]
( 3/16) Installing: libotr5-4.1.1-4.1.x86_64 ...................................................................[done]
( 4/16) Installing: libsignal-protocol-c2-2.3.3-1.15.x86_64 ....................................................[done]
( 5/16) Installing: libstrophe0-0.12.2-1.3.x86_64 ..............................................................[done]
( 6/16) Installing: shared-python-startup-0.1-6.9.noarch .......................................................[done]
( 7/16) Installing: libgdbm_compat4-1.23-1.10.x86_64 ...........................................................[done]
( 8/16) Installing: libpython3_10-1_0-3.10.9-2.2.x86_64 ........................................................[done]
( 9/16) Installing: python310-base-3.10.9-2.2.x86_64 ...........................................................[done]
(10/16) Installing: python310-setuptools-65.6.3-1.2.noarch .....................................................[done]
(11/16) Installing: python310-pip-22.3.1-1.2.noarch ............................................................[done]
(12/16) Installing: python310-3.10.9-2.2.x86_64 ................................................................[done]
(13/16) Installing: python310-dbm-3.10.9-2.2.x86_64 ............................................................[done]
(14/16) Installing: python310-curses-3.10.9-2.2.x86_64 .........................................................[done]
(15/16) Installing: profanity-0.13.1-1.2.x86_64 ................................................................[done]
update-alternatives: using /usr/bin/profanity-mini to provide /usr/bin/profanity (profanity) in auto mode
(16/16) Installing: profanity-mini-0.13.1-1.2.x86_64 ...........................................................[done]

Notice how the bash prompt changed from host$ to profanity-on-tw$, which is the name we gave our container. So the call to zypper happened inside that container. You can now start profanity, and on your host system you will then see the usual files in ~/.config/profanity. Type exit to leave the container.

Usage

Each time you want to start profanity, you now have to enter the container and start it there:

host$ distrobox-enter profanity-on-tw
profanity-on-tw$ profanity

However, you also have the option to “export” profanity to your host system. Some people have a ~/bin, ~/.local/bin or another directory where they put binaries or scripts that they make available via the $PATH variable. Here we will use the ~/.local/bin folder.

profanity-on-tw$ distrobox-export --bin /usr/bin/profanity --export-path $HOME/.local/bin

Now you can call profanity even from the host system. It will be the latest version of Profanity with all its dependencies, running inside a container, but with full access to your usual environment.

Uninstall

host$ distrobox-stop profanity-on-tw
host$ distrobox-rm profanity-on-tw

February 22, 2023 11:03

February 21, 2023

Prosodical Thoughts

Prosody 0.12.3 released

We are pleased to announce a new minor release from our stable branch.

This is a bugfix release for our stable 0.12 series. Most notably, it fixes a regression for SQL users introduced in 0.12.2, and a separate long-standing compatibility issue with archive stores on certain MySQL/MariaDB versions.

It also fixes an issue with websockets discovered by the Jitsi team and some issues with our internal HTTP client API, and we’ve improved the accuracy of ‘prosodyctl check dns’ in certain configurations.

A summary of changes in this release:

Fixes and improvements

  • mod_storage_sql: Don’t avoid initialization under prosodyctl (fix #1787: mod_storage_sql changes (d580e6a57cbb) breaks prosodyctl)
  • mod_storage_sql: Fix for breaking change in certain MySQL versions (#1639)
  • prosodyctl check dns: Check for Direct TLS SRV records even if not configured (#1793)

Minor changes

  • mod_websocket: Fire pre-session-close event (fixes #1800: mod_websocket: cleanly-closed sessions are hibernated by mod_smacks)
  • sessionmanager: Mark session as destroyed to prevent reentry (fixes #1781)
  • mod_admin_socket: Return error on unhandled input to prevent apparent freeze
  • configure: Fix quoting of $LUA_SUFFIX (thanks shellcheck/Zash)
  • net.http.parser: Improve handling of responses without content-length
  • net.http.parser: Fix off-by-one error in chunk parser
  • net.http.server: Add new API to get HTTP request from a connection
  • net.http.server: Fix double close of file handle in chunked mode with opportunistic writes (#1789)
  • util.prosodyctl.shell: Close state on exit to fix saving shell history
  • mod_invites: Prefer landing page over xmpp URI in shell command
  • mod_muc_mam: Add mam#extended form fields #1796 (Thanks Rain)
  • mod_muc_mam: Copy “include total” behavior from mod_mam
  • util.startup: Close state on exit to ensure GC finalizers are called

Download

As usual, download instructions for many platforms can be found on our download page.

If you have any questions, comments or other issues with this release, let us know!

by The Prosody Team at February 21, 2023 10:46

February 19, 2023

JMP

SMS Account Verification

Some apps and services (but not JMP!) require an SMS verification code in order to create a new account.  (Note that this is different from using SMS for authentication, which is a bad idea in its own right since SMS messages can be easily intercepted, are not encrypted in transit, and are vulnerable to SIM swap scams, etc.; but that practice has different incentives and issues.)  Why do they do this, and how can it affect you as a user?

Tarpit

In the fight against service abuse and SPAM, there are no sure-fire one-size-fits-all solutions.  Often preventing abusive accounts and spammers entirely is not possible, so targets turn to other strategies, such as tarpits.  This is anything that slows down the abusive activity, thus resulting in less of it.  This is the best way to think about most account-creation verification measures.  Receiving an SMS to a unique phone number is something that is not hard for most customers creating an account.  Even a customer who does not wish to give out their phone number or does not have a phone number can (in many countries, with enough money) get a new cell phone and cell phone number fairly quickly and use that to create the account.

If a customer is expected to be able to pass this check easily, and an abuser is indistinguishable from a customer, then how can any SMS verification possibly help prevent abuse?  Well, if the abuser needs to create only one account, it cannot.  However, in many cases an abuser is trying to create tens of thousands of accounts.  Now imagine trying to buy ten thousand new cell phones at your local store every day.  It is not going to be easy.

“VoIP Numbers”

Now, JMP can easily get ten thousand new SMS-enabled numbers in a day.  So can almost any other carrier or reseller.  If there is no physical device that needs to be handed over (such as with VoIP, eSIM, and similar services), the natural tarpit is gone and all that is left is the prices and policies of the provider.  JMP has many times received requests to help with getting “10,000 numbers, only need them for one day”.  Of course, we do not serve such customers.  JMP is not here to facilitate abuse, but to help create a gateway to the phone network for human beings whose contacts are still only found there.  That doesn’t mean there are no resellers who will work with such a customer, however.

So now the targets are in a pickle if they want to keep using this strategy.  If the abuser can get ten thousand SMS-enabled numbers a day, and if it doesn’t cost too much, then it won’t work as a tarpit at all!  So many of them have chosen a sort of scorched-earth policy.  They buy and create heuristics to guess if a phone number was “too easy” to get, blocking entire resellers, entire carriers, entire countries.  These rules change daily, are different for every target, and can be quite unpredictable.  This may help when it comes to foiling the abusers, but is bad if you are a customer who just wants to create an account.  Some targets, especially “big” ones, have made the decision to lose some customers (or make their lives much more difficult) in order to slow the abusers down.

De-anonymization

Many apps and services also make money by selling your viewing time to advertisers (e.g. ads interspersed in a social media feed, as pre-/mid-roll in a video, etc.) based on your demographics and behaviour.  To do this, they need to know who you are and what your habits are so they can target the ads you see for the advertisers’ benefit.  As a result, they have an incentive to associate your activity with just one identity, and to make it difficult for you to separate your behaviour in ways that reduce their ability to get a complete picture of who you are.  Some companies might choose to use SMS verification as one of the ways they try to ensure a given person can’t get more than one account, or for associating the account (via the provided phone number) with information they can acquire from other sources, such as where you are at any given time.

Can I make a new account with JMP numbers?

The honest answer is, we cannot say.  While JMP would never work with abusers, and has pricing and incentives set up to cater to long-term users rather than those looking for something “disposable”, communicating that to every app and service out there is a big job.  Many of our customers try to help us with this job by contacting the services they are also customers of; after all, a company is more likely to listen to their own customers than a cold-call from some other company.  The Soprani.ca project has a wiki page where users keep track of what has worked for them, and what hasn’t, so everyone can remain informed of the current state (since a service may work today, but not tomorrow, then work again next week, it is important to track success over time).

Many customers use JMP as their only phone number, often ported in from their previous carrier and already associated with many online accounts.  This often works very well, but everyone’s needs are different.  In particular, those creating new personas that start with a JMP number may find that creating new accounts at some services for the persona can be frustrating or even impossible.  It is an active area of work for us and all other small, easy-access phone network resellers.

by Stephen Paul Weber at February 19, 2023 03:49