Planet Jabber

September 25, 2023

Erlang Solutions

Our experts at Code BEAM Europe 2023

The biggest Erlang and Elixir Conference is coming to Berlin in October! 

Are you ready for a deep dive into the world of Erlang and Elixir? Mark your calendars, because Code BEAM Europe 2023 is just around the corner.

With a lineup of industry pioneers and thought leaders, Code BEAM Europe 2023 promises to be a hub of knowledge sharing, innovation, and networking. 

Erlang Solutions’ experts are working hard to create presentations and training that explore the latest trends and practical applications.

Here’s a sneak peek into what our speakers have prepared for this event:

Natalia Chechina:
Observability at Scale

In her talk, Natalia will share her experience and the rules of thumb for working with metrics and logs at scale. She will also cover the theory behind these concepts.

Read more >>

Nelson Vides & Paweł Chrząszcz:
Reimplementing technical debt with state machines

A set of ideas on how to implement a core protocol that is meant to be extensible, and keep it stable: this talk is a must-attend for all legacy-code firefighters.

Read more >>

Brian Underwood:
Do You Really Need Processes?

A demo of a ride-sharing application that Brian created to explore what is possible with a standard Phoenix + PostgreSQL application.

Read more >>

Visit Our Stand: Meet experts, win prizes

Drop by our exhibition stand at the conference venue and meet the speakers! This is a fantastic opportunity to interact one-on-one with our experts, ask questions, and gain deeper insights into their presentations.

Win a free Erlang Security Audit for your business

We’re showcasing our brand new Erlang Security Audit, for companies whose code is rooted in Erlang and who want to safeguard their systems from vulnerabilities and security threats. Pop by to find out more!

Looking for a messaging solution?

Our super-experienced team will be on hand to understand your messaging needs and demo our expert MongooseIM capabilities. If you’re an existing MongooseIM user, you could also win a specialist Health Check to ensure smooth, optimal operation of your MongooseIM cluster. So come by and say hello.


Book your ticket and see you in Berlin 19-20 Oct! If you can’t make it in person, you can always join us online.

Upskill your team

Make sure to also check out the training offer. The day before Code BEAM Europe, you have the opportunity to join tutorials with experts, including those from Erlang Solutions: Robert Virding, Francesco Cesarini, Natalia Chechina and Łukasz Pauszek.

Check the training programme here >> 

The post Our experts at Code BEAM Europe 2023 appeared first on Erlang Solutions.

by Erlang Admin at September 25, 2023 16:29

September 20, 2023

Erlang Solutions

Smart Sensors with Erlang and AtomVM: Smart cities, smart houses and manufacturing monitoring

For our first article on IoT developments at Erlang Solutions, our goal is to delve into the use of Erlang on microcontrollers, highlighting its capability to run efficiently on smaller devices. We have chosen to address a pressing issue faced by numerous sectors, including healthcare, real estate management, travel, entertainment and hospitality: air quality monitoring. The range of measurements that can be collected is vast and varies from context to context, so we decided to use just one example of the information that can be collected as a conversation starter.

We will guide you through the challenges and demonstrate how Erlang/Elixir can be utilised to measure, analyse, make smart decisions, respond accordingly and evaluate the results.

Air quality is assessed by reading a range of different metrics. Carbon dioxide (CO₂) concentration, particulate matter (PM), nitrogen dioxide (NO₂), ozone (O₃), carbon monoxide (CO) and sulfur dioxide (SO₂) are usually taken into account, often alongside volatile organic compounds (VOCs). Some, but not all, VOCs are human-made and are produced by a variety of processes, whether through urbanisation, manufacturing or the production of other goods and services.

We are measuring CO₂ in this prototype as an example for gathering environmental readings. CO₂ is a greenhouse gas naturally present in the atmosphere and its levels are influenced by many factors, including human activities such as burning fossil fuels.

The specific technical challenge for this prototype was to run our application in very small and power-constrained scenarios. We chose to address this by trying out AtomVM as our alternative to the BEAM.

AtomVM is a new, lightweight implementation of the BEAM virtual machine that is designed to run as a standalone Unix binary or can be embedded in microcontrollers such as STM32, ESP32 and RP2040.

Unlike a single-board computer designed to run a general-purpose operating system, a microcontroller is purpose-built to run application-specific firmware, often with very low power consumption and at lower cost, making it ideal for operating IoT devices.

Our device is composed of an ESP32 microcontroller, a BME280 sensor to measure pressure, temperature and relative humidity, and an SCD40 sensor to measure CO₂ concentration in PPM (parts per million).

The ESP32 that we are going to use in this article is an ESP32-C3, a low-cost single-core RISC-V microcontroller obtainable from authorized distributors worldwide. The SCD40 sensor is made by Sensirion and the BME280 by Bosch Sensortec. There may be cheaper alternatives for these sensors, so feel free to choose according to your needs.

Let’s get going!

Getting dependencies ready

For starters, we will need to have AtomVM installed; just follow the instructions on their website.

It is important to follow the instructions, as this guarantees that you have a working Espressif ESP-IDF installation and that you are able to flash the ESP32 microcontroller via the USB port using the esptool utility provided with ESP-IDF.
You will also need to have rebar3 installed, as we are going to use it to manage the development cycle of the project.

Bootstrapping our application

First, we will need to create our application in order to start wiring things up on the software side. Use rebar3 for creating the application layout:

% rebar3 new app name=co2
===> Writing co2/src/co2_app.erl
===> Writing co2/src/co2_sup.erl
===> Writing co2/src/co2.app.src
===> Writing co2/rebar.config
===> Writing co2/.gitignore
===> Writing co2/LICENSE.md
===> Writing co2/README.md

Make sure to include the rebar3 plugins and dependencies before compiling the scaffold project by adding the following to your rebar.config file:

{deps, [
    {atomvm_lib, {git, "https://github.com/atomvm/atomvm_lib.git", {branch, "master"}}}
]}.

{plugins, [
    atomvm_rebar3_plugin
]}.

Recent atomvm_lib development updates have not yet been published to hex.pm, so we use the master branch, which has some fixes we need. This dependency also includes the BME280 driver that we are going to use.

While we can boot the application on our machine as it is, we also need to implement an extra function that AtomVM will use as an entrypoint. The OTP entrypoint is defined in the co2.app.src file as {mod, {co2_app, []}}, which names the module used to start the application. In AtomVM, however, we need to point the runtime at a start/0 function defined within a module; AtomVM does not start applications the same way standard OTP does. Therefore, some glue must be used:

-module(co2).

-export([start/0]).

start() ->
    {ok, I2CBus} = i2c_bus:start(#{sda => 6, scl => 7}), %% I2C pins for the xiao esp32c3
    {ok, SCD} = scd40:start_link(I2CBus, [{is_active, true}]),
    {ok, BME} = bme280:start(I2CBus, [{address, 16#77}]),
    loop(#{scd => SCD, bme => BME}).

loop(#{scd := SCD, bme := BME} = State) ->
    timer:sleep(5_000),
    {ok, {CO2, Temp, Hum}} = scd40:take_reading(SCD),
    {ok, {Temp1, Press, Hum1}} = bme280:take_reading(BME),
    io:format(
       "[SCD] CO2: ~p PPM, Temperature: ~p C, Humidity: ~p%RH~n",
       [CO2, Temp, Hum]
      ),
    io:format(
      "[BME] Pressure: ~p hPa, Temperature: ~p C, Humidity: ~p%RH~n",
       [Press, Temp1, Hum1] 
      ),
    loop(State).

This module will start the main loop that reads from the sensors and displays the readings over the serial connection.

We are using the stock BME280 driver that comes bundled with the atomvm_lib dependency, meaning that we only needed to change the address at which the BME280 sensor answers on the I2C bus.
For the SCD40 sensor, we need to write some code. According to the SCD40 datasheet, in order to submit commands to the sensor we need to wrap them in a START and STOP condition, signalling the transmission sequence. The sensor provides a range of features and functionality, but we are only concerned with starting periodic measurements and reading those values from the sensor’s memory buffer.

%% 3.5.1 start_periodic_measurement
do_start_periodic_measurement(#state{i2c_bus = I2CBus, address = Address}) ->
    batch_writes(I2CBus, Address, ?SCD4x_CMD_START_PERIODIC_MEASUREMENT),
    timer:sleep(500),
    ok.

…

batch_writes(I2CBus, Address, Register) ->
    Writes =
	[
	 fun(I2C, _Addr) -> i2c:write_byte(I2C, Register bsr 8) end,     %% MSB
	 fun(I2C, _Addr) -> i2c:write_byte(I2C, Register band 16#FF) end %% LSB
	],
    i2c_bus:enqueue(I2CBus, Address, Writes).

Once the SCD40 starts measuring the environment periodically, we can read from the sensor every time a new reading is stored in memory:

%% 3.5.2 read_measurement
read_measurement(#state{i2c_bus = I2CBus, address = Address}) ->
    write_byte(I2CBus, Address, ?SCD4x_CMD_READ_MEASUREMENT bsr 8),
    write_byte(I2CBus, Address, ?SCD4x_CMD_READ_MEASUREMENT band 16#FF),
    timer:sleep(1_000),
    case read_bytes(I2CBus, Address, 9) of
        {ok,
         <<C:2/bytes, _CCRC:1/bytes,
           T:2/bytes, _TCRC:1/bytes,
           H:2/bytes, _HCRC:1/bytes>>} ->
            %% Each value is sent MSB first (big endian): 2 bytes for CO2,
            %% 2 bytes for raw temperature and 2 bytes for raw humidity,
            %% each followed by an 8-bit CRC
            <<C1, C2>> = C,
            <<T1, T2>> = T,
            <<H1, H2>> = H,
            {ok, {(C1 bsl 8) bor C2,
                  -45 + 175 * (((T1 bsl 8) bor T2) / math:pow(2, 16)),
                  100 * (((H1 bsl 8) bor H2) / math:pow(2, 16))}};
        {error, _Reason} = Err ->
            Err
    end.


According to the datasheet, the response we read back is 9 bytes that we need to unpack and convert. Each data word is followed by an 8-bit CRC checksum that we do not check here, but validating it would be a useful guard against corrupted readings. All the conversions above follow the basic command specifications in the official datasheet.
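To make that concrete, here is a minimal sketch of how the check could look. Sensirion sensors use a CRC-8 with polynomial 0x31 and initial value 0xFF computed over each two-byte data word; the crc8/1 helper below is hypothetical and not part of the driver code shown above:

%% Sketch of the Sensirion CRC-8 (polynomial 16#31, initial value 16#FF),
%% computed over each two-byte data word. Hypothetical helper, not part of
%% the driver above.
crc8(Bytes) when is_list(Bytes) ->
    lists:foldl(
      fun(Byte, Acc) ->
              lists:foldl(
                fun(_Bit, Crc) ->
                        case Crc band 16#80 of
                            0 -> (Crc bsl 1) band 16#FF;
                            _ -> ((Crc bsl 1) bxor 16#31) band 16#FF
                        end
                end, Acc bxor Byte, lists:seq(1, 8))
      end, 16#FF, Bytes).

In read_measurement/1, crc8([C1, C2]) should then equal the CRC byte that follows the CO2 word (the check value given in Sensirion datasheets for [16#BE, 16#EF] is 16#92).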

Flashing our application

In order to get our application packed in AVM format and flashed to the ESP32, we will need to add a rebar3 plugin that handles all those steps for us. It is possible to perform these steps manually, but that quickly becomes tedious and error-prone. By using rebar3 again, we gain access to a more streamlined development process.

Add the following to your rebar.config: 

{plugins, [
    {atomvm_rebar3_plugin, {git, "https://github.com/atomvm/atomvm_rebar3_plugin.git", {branch, "master"}}}
]}.

This plugin provides a few commands that we will use, mainly `esp32_flash` and `packbeam`. Due to the way the plugin is implemented, calling `esp32_flash` will fetch the project dependencies, compile the application and pack its BEAM files into an AVM file designed to be flashed onto our device:

% rebar3 esp32_flash --port /dev/tty.usbmodem2101

Note: You must use a port that matches your own.

Obtaining readings

If everything goes according to plan, we should be able to connect to our device and see the readings printed over the serial port. To do this, we issue the following command from within the AtomVM/src/platforms/esp32 directory:

% ESPPORT=/dev/tty.usbmodem2101 idf.py -b 115200 monitor


Note: You must use a port that matches your own.

The output should match something along these lines:

[SCD] CO2: 848 PPM, Temperature: 2.91405487060546875000e+01 C, Humidity: 3.74832153320312500000e+01%RH
[BME] Pressure: 7.46429999999999949978e+02 hPa, Temperature: 2.89406427826446881000e+01 C, Humidity: 3.84759374625173900000e+01%RH
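If you prefer more readable numbers in the log, a small tweak to the format strings in the loop shown earlier rounds the floats (a minimal sketch; ~.2f prints a float with two decimal places, while the CO2 value stays an integer and keeps ~p):

    io:format(
       "[SCD] CO2: ~p PPM, Temperature: ~.2f C, Humidity: ~.2f%RH~n",
       [CO2, Temp, Hum]
      ),
    io:format(
      "[BME] Pressure: ~.2f hPa, Temperature: ~.2f C, Humidity: ~.2f%RH~n",
       [Press, Temp1, Hum1]
      ),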



Closing remarks

We have explored writing an Erlang application that runs on an ESP32 microcontroller using AtomVM, an alternative implementation of the BEAM. We also managed to read the environmental metrics we were interested in, such as temperature, humidity and CO₂, for further processing.

Our highlights include the ease of manipulating binary data with pattern matching, and overall developer happiness.

The ability to run Erlang applications on microcontrollers opens up a wide range of possibilities for IoT development. Erlang is a well-known language for building reliable and scalable applications, and its use on microcontrollers can help ensure that these applications are able to handle the demands of IoT.

External links

The post Smart Sensors with Erlang and AtomVM: Smart cities, smart houses and manufacturing monitoring appeared first on Erlang Solutions.

by Ricardo Lanziano at September 20, 2023 14:56

September 18, 2023

Snikket

State of Snikket 2023: Funding

As promised in our ‘State of Snikket 2023’ overview post, and teased at the end of our first update post about app development, this post in the series is about that thing most of us open-source folk love to hate… money.

We are an open-source project, and not-for-profit. Making money is not our primary goal, but like any business we have upstream expenses to pay - to compensate for the time and specialist work we need to implement the Snikket vision. To do that, we need income.

This post will cover where our funding has come from over the last couple of years and where we’ve been spending it. We’ll also talk a bit about where we anticipate finding funding over the next year or so, and what some of that is budgeted for.

Our last post on this topic was two years ago, when we announced the Open Technology Fund grant that allowed SuperBloom (then known as Simply Secure) to work on the UI/UX of the Snikket apps. Since then, other pieces of Snikket-related work have been supported by two more grants - both from projects managing funds dedicated to open source and open standards under the EU’s NGI (Next Generation Internet) initiative.

The first one was a project called DAPSI (Data Portability and Services Incubator), focused on enabling people to move their data more easily between different online services. DAPSI funded Snikket directly to support Matthew’s work on account portability standards, which can be used not only in the software projects underlying Snikket itself, but any and all XMPP software. This one helped keep Matthew fed for much of 2021, and as we described on our blog after the funding was confirmed, it kept him busy with:

  • Standardizing the necessary protocols and formats for account data import and export
  • Developing open-source, easy-to-use tools that allow people to export, import and migrate their accounts between XMPP services
  • Building this functionality into Snikket

The other grant was from the NGI Assure Fund, administered by NLnet. It was one Matthew applied for on behalf of the Prosody project, and it helped keep him busy and fed through the second half of 2022 and into 2023. Prosody is the XMPP server project that the Snikket server software is built on, so any improvements there flow fairly directly to people using Snikket.

NGI Assure is focused on improving the security of people’s online accounts, and their grant to Prosody was for work on bringing new security features like multi-factor authentication to XMPP accounts. The work included in the scope of the grant is now complete, and some of it is already available to be used. The rest will be boxed up over the coming months and released, to start finding its way into XMPP software.

Both of these successful grant applications are practical examples of the Snikket company serving as a way to fund important work on the software and standards that the Snikket software and services depend on. Work that can be hard to fund any other way. However, grants like these usually cover a medium-to-long-term piece of work with a very specific scope, which can divert time away from other parts of the project. It is hard to find grants with a focus on general improvements, bug fixing and maintenance. This is the main reason why there hasn’t been as much work on the app side of things, nor updates on this blog.

We very much appreciate the grants we’ve received from all these funders, and the important features they have enabled us to implement. But ultimately we see “side income” like grants as a short-term way to plug the holes in our financial bucket while we’re still getting up and running. Our long term goal, as a social enterprise (specifically a UK-based Community Interest Company), has always been to earn the income we need through donations and by providing commercial services to the community using Snikket software.

When Snikket began, the main plan for this was to set up a hosting service, where people can pay a regular subscription to have us look after their Snikket server (more on this below). But over the last year or so we’ve discovered that there’s a lot to be gained from partnering with other social enterprises with shared values and related goals.

One such company is JMP.chat, an innovative telephony company that provides phone numbers which can be used with XMPP apps, for both text messages and calls. They celebrated JMP’s official public launch just a few months ago.

We’re very grateful to JMP for funding the other half of Matthew’s work hours while he was beavering away on the NGI Assure grant work. Why were they willing to do that? To answer that, we need to tell you a bit more about what they do.

During the six years their service has been in beta testing, JMP’s first priority has been developing software gateways to allow XMPP apps to communicate with mobile phone networks, and vice-versa. However, many of their customers are newcomers to the world of XMPP. They would often struggle to find suitable apps with the required features for their platform, and struggle to find good servers on which they can register their XMPP accounts.

What could be a better solution to this problem than a project that aims to produce a set of easy-to-use XMPP-compliant apps with a consistent set of features across multiple platforms? Yes - Snikket complements their service wonderfully!

So we have been collaborating a lot with JMP (or more generally, Soprani.ca - their umbrella project for all their open-source projects, including JMP). On the app development side, we share code between Snikket Android and their Cheogram Android app (both are based on, and contribute back to, Conversations). We have also worked to ensure that iOS is not left behind, integrating features such as an in-call dial pad to Snikket iOS as well.

If JMP customers don’t already have access to a hosted XMPP server, and have neither the time nor the skills to run their own, they need one of those too. So JMP have been suggesting Snikket’s hosting service to customers who don’t have an XMPP account yet. With all the necessary features for a smooth experience, easy setup and hosting available, Snikket ticks all the boxes. In fact, the latest version of Cheogram allows you to launch your own Snikket instance directly within the app!

A lot of work has been put into ensuring the hosting service is easy, scalable and reliable - to be ready for JMP’s launch traffic and also well into the future.

But while JMP is an excellent partner, Snikket isn’t only about JMP. We’re preparing for our own service to also exit beta before the end of this year. Once we do, revenue from the service will help us cover the costs of continuing to grow and advance all of our goals. Pricing has not been set yet, but we’re aiming for a balance between sustainable and affordable.

JMP will continue to sponsor half of Matthew’s time on the project. The other half is covered by our other supporters. You know who you are and we’re very grateful for your support.

The income sources we’ve talked about so far pay for Matthew’s time to work on Snikket and related projects. We also appreciate the donations a number of people have made to the project via LiberaPay and GitHub sponsorships. These help us pay for incidental expenses like:

  • Project infrastructure, including this website, domain names, and push notification services and monitoring.

  • Development costs, like paying for an Apple developer account.

  • Travel costs of getting to conferences for presentations.

One other important thing these donations help to pay for is test devices.

We buy, or are donated, second-hand devices for developing and testing the Snikket apps. Used devices are much cheaper, so we can get more test devices for the same budget. Also, most people don’t get a brand new device every year, so these slightly older devices are more likely to match what the average person is using.

Finally, we consider the environmental benefit. Using older but functional devices gives them a second life, preventing them from being needlessly scrapped, and keeping them out of the growing e-waste piles our societies now produce.

So that’s everything there is to share on the topic of Snikket’s finances for now. But we’re not done with our ‘State of Snikket 2023’ updates, oh no.

As we mentioned at the end of the last piece in this series, there’s at least one more coming, about new regulations for digital technology and online services. A number of governments around the world are passing or proposing laws that could affect Snikket - some of them a bit concerning - and we have a few things to say about them.

We’re also going to sneak in a review of the inaugural FOSSY conference Matthew presented at recently.

Watch this space!

by Snikket Team (team@snikket.org) at September 18, 2023 13:20

September 13, 2023

JMP

Newsletter: Summer in Review

Hi everyone!

Welcome to the latest edition of your pseudo-monthly JMP update!

In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client.  Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

Since our launch at the beginning of the summer, we’ve kept busy.  We saw some of you at the first FOSSY, which took place in July.  For those of you who missed it, the videos are out now.

Automatic refill for users of the data plan is in testing now.  That should be fully automated a bit later this month and will pave the way for the end of the waiting list, at least for existing JMP customers.

This summer also saw the addition of two new team members: welcome to Gnafu the Great who will be helping out with support, and Amolith, who will be helping out on the technical side.

There have also been several releases of the Cheogram Android app (latest is 2.12.8-2) with new features including:

  • Support for animated avatars
  • Show “hats” in the list of channel participants
  • An option to show related channels from the channel details area
  • Emoji and sticker autocomplete by typing ‘:’ (allows sending custom emoji)
  • Tweaks to thread UI, including no more auto-follow by default in channels
  • Optionally allow notifications for replies to your messages in channels
  • Allow selecting text and quoting the selection
  • Allow requesting voice when you are muted in a channel
  • Send link previews
  • Support for SVG images, avatars, etc.
  • Long press send button for media options
  • WebXDC importFiles and sendToChat support, allowing, for example, import and export of calendars from the calendar app
  • Fix Command UI in tablet mode
  • Manage permissions for channel participants with a dialog instead of a submenu
  • Ask if you want to moderate all recent messages by a user when banning them from a channel
  • Show a long streak of moderated messages as just one indicator

To learn what’s happening with JMP between newsletters, here are some ways you can find out:

Thanks for reading and have a wonderful rest of your week!

by Stephen Paul Weber at September 13, 2023 20:19

Erlang Solutions

Diversity & Inclusion at Code BEAM Europe 2023

Our Pledge to Diversity

As technology becomes increasingly integrated into our lives, it’s crucial that the minds behind it come from diverse backgrounds. Different viewpoints lead to more comprehensive solutions, ensuring that the tech we create addresses the needs of a global audience.

At Erlang Solutions, we believe that a diverse workforce is a catalyst for creativity and progress. By sponsoring the Diversity & Inclusion Programme for Code BEAM Europe 2023, we’re reinforcing our commitment to creating a tech landscape that is reflective of the world we live in.

This initiative is not just about breaking down barriers; it’s about opening doors to new perspectives, ideas, and endless possibilities.

At Erlang Solutions, we believe that diversity isn’t just a buzzword – it’s a fundamental pillar of progress. Our sponsorship of the Diversity & Inclusion Programme at Code BEAM aligns perfectly with our values. We’re excited to be part of an event that encourages open dialogue, showcases diverse talent, and paves the way for a more inclusive tech industry. – Jo Galt, Talent Manager.

The goal of the programme is to increase the diversity of attendees and offer support to groups underrepresented in the tech community who would not otherwise be able to attend the conference.

The Diversity & Inclusion Programme focuses primarily on empowering women, ethnic minorities and people with disabilities, among others, but everybody is welcome to apply.

The post Diversity & Inclusion at Code BEAM Europe 2023 appeared first on Erlang Solutions.

by Erlang Admin at September 13, 2023 13:31

September 07, 2023

Erlang Solutions

Pay down technical debt to modernise your technology estate

Imagine this scenario. Your CEO tells you the organisation needs a complete tech overhaul, then gives you a blank cheque and free rein. He tells you to sweep away the old and usher in the new. “No shortcuts, no compromise!” he cries. “Start from scratch and make it perfect!”

And then you wake up. As we all know, this scenario is pure fantasy. Instead, IT leaders are faced with a constant struggle to keep up with the needs of the business, using limited resources, time-saving shortcuts and legacy systems.

That’s the normal way of things and it can be a recipe for serious technical debt.

What is technical debt?

You’ve probably heard the term “technical debt”, even if there’s no agreement on what exactly it means. 

At its simplest, technical debt is the price you pay for an unplanned, non-optimised IT stack. There are many reasons for technical debt but in every definition, the debt tends to build over time and become more complex to solve.

The phrase is intentionally analogous to financial debt. When you buy a house, you take out debt in the form of a mortgage. You get instant access to the thing you need – a home – but there are consequences down the line in the form of interest payments.

As the metaphor suggests, technical debt is not always bad, just as financial debt is not always bad. It can be useful to do things quickly, as long as you’re prepared to tackle the consequences when they inevitably emerge. Unfortunately, lots of organisations take on the debt without thinking about the challenges to come.

The challenges of technical debt

How does technical debt come about? The simple answer is, in the normal cut and thrust of running a busy organisation.

  • You create a temporary fix to a software problem and then don’t have time to design a better one. Over time, the temporary fix becomes a permanent part of your solution. The sticking plaster becomes the cure. 
  • Your development teams are put under pressure to get something to market in super-quick time, to grasp a time-sensitive opportunity. They get it done – brilliantly – by making something that works well for the task in hand, but the time-saving shortcuts slow down other systems in the longer term.
  • Finance refuses to replace legacy systems that just about work, even if they can’t offer the flexibility or speed that a modern digital-first organisation needs.

In each case, complexity builds. One quick fix after another undermines the efficiency of your wider technology stack. Solutions work in isolation when they need to work together. Systems creak under the pressure of outdated or overly complex code.

What level of debt does this ad hoc activity accumulate? According to research by consultancy McKinsey, tech debt amounts to between 20% and 40% of the worth of an organisation’s entire technology stack. The study also found that those with the lowest technical debt performed better.

How to manage technical debt

So what can you do about it? In short, the way to manage technical debt is to pay it off. Not necessarily all of it, because some debt is OK. But it should be nearer 5% than 40%.

The first thing is to understand what your technical debt is, and what’s causing it. Some of this you may instinctively know, such as the slowdown in productivity caused by legacy infrastructure.

Elsewhere, it can be relatively straightforward to identify the symptoms of an overly complex or outdated system:

  • When an engineer is assigned a support ticket, how long does it take to complete the task? Are average completion times increasing? If so, you’re accumulating technical debt.
  •  Are you having to fix your fixes? Maybe an application requires reworking one week and then again a couple of weeks later. In this case, debt is building up.
  • If you often have to develop applications and solutions quickly, or patch legacy software to keep it running, you’re also likely to be accumulating technical debt. 

In business-critical applications, it’s worth analysing the quality of the underlying code. It might have been fine five years ago, but half a decade can be a long time in technology. Writing modern code in a more efficient language will likely create significant efficiencies.

Don’t try and do everything

Identifying and reducing technical debt is a resource-intensive task. But it can be made manageable if you focus on evolution rather than revolution.

McKinsey cites the example of a company that identified technical debt across 50 legacy applications but found that most of that debt was driven by fewer than 20. Every business is likely to have core assets that create most of its technical debt. They’re the ones to focus on.

When you have identified the most debt-laden solutions, put the support, funding and governance in place to pay down the debt. Create meaningful KPIs and keep to them. Think about how to avoid debt accumulating again after modernisation projects are completed.

Elixir and technical debt: commit to modern coding

At the core of that future-proofing effort is code. Legacy coding techniques and languages create clunky, inefficient applications that will soon load your organisation with technical debt.

One way to avoid that is through the use of Elixir, a simple, lightweight programming language that is built on top of the Erlang virtual machine.

Elixir helps you avoid technical debt by creating uncomplicated, effective code. The simpler the code, the less likely it is to go wrong, and the easier it is to identify faults if it does.   

In addition, Elixir-based applications tend to run at close to the optimal performance for their hardware environment, making the best use of your technology stack without incurring technical debt.

In short, Elixir is a modern language that is designed for modern technology estates. It reduces technical debt through simplicity, efficiency and easy optimisation. 

Want to know more about efficient, effective development with Elixir, and how it can reduce your technical debt? Why not drop us a line?  

The post Pay down technical debt to modernise your technology estate appeared first on Erlang Solutions.

by Cara May-Cole at September 07, 2023 16:07

September 06, 2023

Prosodical Thoughts

Prosody 0.12.4 released

We are pleased to announce a new minor release from our stable branch.

We’re relieved to announce this overdue maintenance release containing a number of bug fixes and also some improvements from the last few months.

In particular, the prosodyctl check tool gained some new diagnostic checks, and it now handles configuration option types the same way Prosody itself does.

A summary of changes in this release:

Minor changes

  • core.certmanager: Update Mozilla TLS config to version 5.7
  • util.error: Fix error on conversion of invalid error stanza #1805
  • util.array: Fix new() library function
  • util.array: Expose new() on module table
  • prosodyctl: Fix output of error messages containing ‘%’
  • util.prosodyctl.check: Correct suggested replacement for ‘disallow_s2s’
  • util.prosodyctl.check: Allow same config syntax variants as in Prosody for some options #896
  • util.prosodyctl.check: Fix error where hostname can’t be turned into A label
  • util.prosodyctl.check: Hint about the ‘external_addresses’ config option
  • util.prosodyctl.check: Suggest ‘http_cors_override’ instead of older CORS settings
  • util.prosodyctl.check: Validate format of module list options
  • mod_websocket: Add a ‘pre-session-close’ event #1800
  • mod_smacks: Fix stray watchdog closing sessions
  • mod_csi_simple: Disable revert-to-inactive timer when going to active mode
  • mod_csi_simple: Clear delayed active mode timer on disable
  • mod_admin_shell: Fix display of remote cert status when expired etc
  • mod_smacks: Replace existing watchdog when starting hibernation
  • mod_http: Fix error if ‘access_control_allow_origins’ is set
  • mod_pubsub: Send correct ‘jid’ attribute in disco#items
  • mod_http: Unhook CORS handlers only if active to fix an error #1801
  • mod_s2s: Add event where resolver for s2sout can be tweaked

Download

As usual, download instructions for many platforms can be found on our download page.

If you have any questions, comments or other issues with this release, let us know!

by The Prosody Team at September 06, 2023 10:42

September 05, 2023

The XMPP Standards Foundation

The XMPP Newsletter August 2023

Welcome to the XMPP Newsletter, great to have you here again! This issue covers the month of August 2023. Many thanks to all our readers and all contributors!

Like this newsletter, many projects and their efforts in the XMPP community are a result of people’s voluntary work. If you are happy with the services and software you may be using, please consider saying thanks or help these projects! Interested in supporting the Newsletter team? Read more at the bottom.

XMPP and Google Summer of Code 2023

The XSF has been accepted again as a hosting organisation at GSoC 2023 and received two slots for XMPP contributors!

On Dino:

On Moxxy:

XSF fiscal hosting projects

The XSF offers fiscal hosting for XMPP projects. Please apply via Open Collective. For more information, see the announcement blog post. Current projects:

XMPP Events

  • XMPP Office Hours: available on our YouTube channel
  • Berlin XMPP Meetup (remote): monthly meeting of XMPP enthusiasts in Berlin, every 2nd Wednesday of the month
  • XMPP Italian happy hour: monthly Italian XMPP web meeting, starting May 16th and then every third Tuesday of the month at 7:00 PM (online event, with web meeting mode and live streaming).
  • TroLUG XMPP Workshop: TroLUG is organising its second XMPP workshop, held in German, on 2023-09-07. It takes place as an audio conference via BBB. Everyone is welcome to join the workshop.

Videos

There has been an XMPP track at FOSSY2023 with many talks:

  • XMPP Connectivity & Security is an introduction about XMPP connectivity XEPs like XEP-0368 (Direct TLS), XEP-0467 (QUIC), XEP-0468 (WebSocket S2s) and the internals of xmpp-proxy, a forward+reverse proxy, and others.
  • XMPP Introduction and Overview is a brief history and introduction to the XMPP protocol for people with little background in programming.
  • My XMPP Past, Present, and Future is a point-of-view journey through the evolution of the XMPP ecosystem from 2004. It explains how it was affected by major events such as the decline of traditional IM services, the beginning of the smartphone era and new chat services, and more.
  • Building open standards-based ecosystems is a talk about how XMPP as both a community and a protocol adapted to change, and the role that the XSF played in its continuity, but also a general discussion about sustainability of open ecosystems and open networks.

Articles

  • No articles this month.

Software news

Clients and applications

Snikket - Chat that is simple, secure, and private

Servers

Libraries & Tools

XMPP Providers - Which XMPP provider suits you best? It’s your choice.

Extensions and specifications

The XMPP Standards Foundation develops extensions to XMPP in its XEP series in addition to XMPP RFCs.

Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).

Proposed

The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

  • MUC Token Invite
    • This specification provides a way to generate tokens to invite users to a MUC room.

New

  • No new XEPs this month.

Deferred

If an experimental XEP is not updated for more than twelve months, it will be moved off Experimental to Deferred. If there is another update, it will put the XEP back onto Experimental.

  • No XEPs deferred this month.

Updated

Last Call

Last calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call can help improve the XEP before returning it to the Council for advancement to Stable.

  • No Last Call this month.

Stable

  • No XEP moved to stable this month.

Deprecated

  • No XEP deprecated this month.

Spread the news

Please share the news on other networks:

Subscribe to the monthly XMPP newsletter

Also check out our RSS Feed!

Looking for job offers or want to hire a professional consultant for your XMPP project? Visit our XMPP job board.

Newsletter Contributors & Translations

This is a community effort, and we would like to thank translators for their contributions. Volunteers are welcome! Translations of the XMPP Newsletter will be released here (with some delay):

  • English (original): xmpp.org
    • General contributors: Adrien Bourmault (neox), Alexander “PapaTutuWawa”, Arne, cal0pteryx, emus, Jonas Stein, Licaon_Kter, Ludovic Bocquet, melvo, MSavoritias (fae,ve), nicola, XSF iTeam
  • French: jabberfr.org and linuxfr.org
    • Translators: Adrien Bourmault (neox), alkino, anubis, Arkem, Benoît Sibaud, mathieui, nyco, Pierre Jarillon, Ppjet6, Ysabeau
  • German: xmpp.org and anoxinon.de
    • Translators: Jeybe, wh0nix
  • Italian: notes.nicfab.eu
    • Translators: nicola
  • Spanish: xmpp.org
    • Translators: daimonduff, TheCoffeMaker

Help us to build the newsletter

This XMPP Newsletter is produced collaboratively by the XMPP community. Each month’s newsletter issue is drafted in this simple pad. At the end of each month, the pad’s content is merged into the XSF Github repository. We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. You have a project and want to spread the news? Please consider sharing your news or events here, and promote it to a large audience.

Tasks we do on a regular basis:

  • gathering news in the XMPP universe
  • short summaries of news and events
  • summary of the monthly communication on extensions (XEPs)
  • review of the newsletter draft
  • preparation of media images
  • translations
  • communication via media accounts

License

This newsletter is published under CC BY-SA license.

September 05, 2023 00:00

August 31, 2023

Erlang Solutions

What businesses should consider when adopting AI and machine learning

AI is everywhere. The chatter about chatbots has crossed from the technology press to the front pages of national newspapers. Worried workers in a wide range of industries are asking if AI will take their jobs.

Away from the headlines, organisations of all sizes are getting on with the task of working out what AI can do for them. It will almost certainly do something. One survey puts AI’s potential boost to the global economy at an eye-watering US$15.7 trillion by 2030.

Those major gains will come from productivity enhancements, better data-driven decision making and enhanced product development, among other AI benefits. 

It’s clear from all this that most businesses can’t afford to ignore the AI revolution. It has the very real potential to cut costs and create better customer experiences. 

Quite simply, it can make businesses better. If you’re not at least thinking about AI right now, you should be aware that your competitors probably are.

So the question is, what AI tools and processes are the right ones for you, and how do you implement groundbreaking technology that actually works, without disrupting day-to-day workflows? Here are a few things to consider.

AI or machine learning?

The first thing is to be clear about what you mean by AI.

AI is an umbrella term for technologies that attempt to mimic human intelligence. Perhaps the most important, at least at the moment, is machine learning (ML).

ML tools analyse existing data to create business-enhancing insight. The more data they’re exposed to, the more they ‘learn’ what to look out for. They find patterns and trends in mountains of information and do so at speed.    

As far as business is concerned, those patterns might pinpoint unusual sales trends, potential production bottlenecks, or hidden productivity issues. They might reveal a hundred other potential opportunities and challenges. The important thing to remember is that ML of this kind automates the ability to learn from what has gone before.

ChatGPT and similar technologies, meanwhile, are part of a class of tools called generative AI. These applications also use machine learning techniques to mine huge datasets but do so for fundamentally different reasons.

If ML looks back at existing materials, generative AI looks forward and creates new ones. One obvious role is in creating content, but generative tools can also produce code, business simulations and product designs.

These two AI technologies can work together. For example, they might be tasked with automating the production of reports based on detailed analysis of a previous year’s data.

Work out your goals for AI and machine learning

Once you know a little about different AI tools, the next step is to understand what they can do for you. Begin with a business goal, not a technology.

Start by identifying problems you need to solve, or opportunities you want to grasp. Maybe you want to create data-driven marketing campaigns for different customer segments. Maybe your website is crying out for a series of basic ‘how-to’ animations. Maybe you have processes that are ripe for automation? 

Whatever it is, the key is to identify the challenges you face and the opportunities you can take advantage of, and then mould an AI strategy that meets real business goals.

Become a data-centric organisation

As we’ve seen, AI and ML are dependent on data, and lots of it. The more data they can use, the more accurate and useful they tend to be.

But data can be problematic. It is diverse, fragmented and often unstructured. It needs to be stored and moved securely and in line with relevant privacy regulations. All of this means that to make it valuable, you need to create a data management strategy.

That strategy needs to address challenges related to data sourcing, storage, quality, governance, integration, analysis and culture. 

Corporate data is typically spread across an organisation and often found squirrelled away in the silos of legacy technology systems. It needs to be pooled, formatted and made accessible to the AI and ML tools of different departments and business units. Data is only useful when it’s available.

AI and machine learning: Start small and keep it simple

All of this makes implementing AI and ML sound like a highly time-consuming and complex undertaking. But it needn’t be, and especially not at first.

The secret to most successful technology implementations is to start small and simple. That’s doubly true with something as potentially game-changing as AI and ML. 

For example, start applying ML tools to just a small section of your data, rather than trying to do too much too soon. Pick a specific challenge that you have, focus on it, and experiment with refining processes to achieve better results. Then increase AI use incrementally as the technology proves its worth.

Bring your team with you

Much of the recent publicity around AI has focused on doom-laden predictions of mass unemployment. When you talk about adopting AI and ML in your organisation, employee alarm bells may start ringing, which could have serious implications for staff morale and productivity.

But in most organisations, AI is about augmenting human effort, not replacing it. AI and ML can automate the mundane tasks people don’t like doing, freeing them up for more creative activity. It can provide insight that improves human decision making, but humans still make the decisions. It is far from perfect, and human oversight of AI is required at every step.

Your communications around the implementation of AI should emphasise these points. AI is a tool for your people to use, not a substitute for their efforts.

Elixir and Erlang machine learning

As businesses become familiar with AI and ML tools, they may start creating their own, tailored to their specific needs and circumstances. Organisations that develop and modify AI and ML tools increasingly do so using Elixir, a programming language based on the Erlang Virtual Machine (VM).  

Elixir is perfect for creating scalable AI applications for three core reasons:

  • Concurrency: Elixir is designed to handle lots of tasks simultaneously, which is ideal for AI applications that have to process large amounts of data from different sources. 
  • Functional programming: Elixir is a functional language, favouring small, composable functions and immutable data. That’s perfect for AI because the simpler your AI algorithms, the more reliable they are likely to be.
  • Distributed computing: AI applications demand significant computational resources that developers spread across multiple machines. Elixir offers in-built distribution capabilities, making distributed computing straightforward.

In addition, Elixir is supported by a wide range of libraries and tools, providing ready-made solutions to challenges and shortening the development journey. 

The result is AI applications that are efficient, scalable and reliable. That’s hugely important because as AI and ML become ever more crucial to business success, effective applications and processes will become a fundamental business differentiator. AI isn’t something you can ignore. If you aren’t already, start thinking about your own AI and ML strategy today.   

Want to know more about efficient, effective AI development with Elixir? Talk to us.

The post What businesses should consider when adopting AI and machine learning appeared first on Erlang Solutions.

by Cara May-Cole at August 31, 2023 09:29

August 28, 2023

Ignite Realtime Blog

CVE-2023-32315: Openfire vulnerability (update)

A few months ago, we published details about an important security vulnerability in Openfire that is identified as CVE-2023-32315.

To summarize: Openfire’s administrative console (the Admin Console), a web-based application, was found to be vulnerable to a path traversal attack via the setup environment. This permitted an unauthenticated user to access restricted pages in the Openfire Admin Console reserved for administrative users.

Leveraging this, a malicious actor can gain access to all of Openfire, and, by extension (through installing custom plugins), much of the infrastructure that is used to run Openfire. The Ignite Realtime community has made available new Openfire releases in which the issue is addressed, and published various mitigation strategies for those who cannot immediately apply an update. Details can be found in the security advisory that we released back in May.

In the last few days, this issue has seen a considerable increase in exposure: there have been numerous articles and podcasts that discuss the vulnerability. Many of these seem to refer back to a recent blogpost by Jacob Banes at Vulncheck.com, and those that do not seem to include very similar content.

Many of these articles point out that there’s a “new way” to exploit the vulnerability. We indeed see that there are various methods being used, in the wild, in which this vulnerability is abused. Some of these methods leave less traces than others, but the level of access that can be obtained through each of these methods is pretty similar (and, sadly, similarly severe).

Given the renewed attention, we’d like to make clear that there is no new vulnerability in Openfire. The issue, solutions and mitigations that are documented in the original security advisory are still accurate and up to date.

Malicous actors use a significant amount of automation. By now, it’s almost safe to assume that your instance has been compromised if you’re running an unpatched instance of Openfire that has its administrative console exposed to the unrestricted internet. Tell-tale signs are high CPU loads (of crypto-miners being installed) and the appearance of new plugins (which carry the malicious code), but this by no means is true for every system that’s compromised.

We continue to urge everyone to update Openfire to its last release, and carefully review the security advisory that we released back in May, to apply applicable mitigations where possible.

For other release announcements and news follow us on Twitter and Mastodon.

1 post - 1 participant

Read full topic

by guus at August 28, 2023 08:21

August 24, 2023

Erlang Solutions

Future-proofing legacy systems with Erlang

Relying on outdated legacy systems often serves as the biggest hindrance to both innovation and optimisation for businesses today. Since many of these systems have been used for years, if not multiple decades, the significant costs involved with replacing a system entirely are rarely within budgets, particularly in today’s business climate.

But that doesn’t mean legacy systems should be left as is. Erlang is a resilient and proven high-level programming language that, when utilised effectively, can improve current legacy systems to ensure they remain both secure and efficient in the future.

Why Should I Future-Proof My Legacy System Instead of Replacing It?

The vast majority of companies are still relying on legacy systems due to the speed at which technology continues to advance. As an example, in 2021 91% of UK financial institutions were still operating on at least some form of legacy infrastructure.

Occasionally, the knee-jerk reaction to the idea of legacy systems is that they should be completely replaced, but this is rarely the ideal solution. Entirely replacing a legacy system often involves completely changing the way your business operates – for starters, designing a functional system can take years to both design and implement. After these lengthy phases have been completed, all employees will then still require training on a new system so it can be used appropriately.

Choosing instead to future-proof your current system can therefore be a far more cost-effective solution. Updating a system using a language like Erlang also means that improvements can be incremental, which saves on employee training costs.

As long as a legacy system is future-proofed properly, your company will also access the same benefits you would have if you’d designed an entirely new system. That typically means a more secure infrastructure, a more reliable network, and a more efficient way of managing your operations.

Using Erlang to Effectively Future-Proof Legacy Systems

Erlang offers a number of notable benefits when applied to legacy system improvements and future-proofing across multiple different sectors. The below represents some of the core ways in which Erlang serves as the ideal programming language for these applications.

Restructuring with Erlang Can Increase Availability and Reduce Downtime

Erlang can be utilised to establish and bolster core systems, a use case Erlang Solutions achieved with FinTech unicorn Klarna. Our team assisted Klarna by future-proofing their over 10-year-old payment system. 

By using Erlang, Klarna gained the flexibility to adapt its system, incorporating a number of additional tech stacks like Scala, Clojure and Haskell. With this new combined approach, they were able to increase the overall availability of their system considerably whilst reducing their downtime to zero.

You can find out more about this particular Erlang success story in our previous blog post here.

Downtime reduction is essential for business continuity planning today. Outages, often caused by outdated legacy systems, have been on the rise over the past few years, alongside the costs incurred by these issues. By future-proofing your system with Erlang, your business can protect against these risks by building more resilient infrastructure.

Erlang Allows Legacy Systems to Scale

If a system isn’t able to appropriately scale, it will inevitably become obsolete once your business grows big enough, or technology advances long past its capabilities. As shown by the above Klarna success story, Erlang enables a system to become more scalable thanks to both its flexibility and reliability as a programming language. 

Erlang is also highly scalable because it is able to accommodate large data volumes, which is crucial in today’s data-driven economy. This makes it a particularly adept solution for legacy systems used in data-heavy industries, like the financial and healthcare sectors.

Creating a Flexible and Adaptable Legacy System with Erlang

By updating the flexibility and adaptability of legacy systems, Erlang enables greater tech integration whilst ensuring your business is fully prepared for whatever the future might present.

This adaptability means that your legacy system can also access additional improvements through other technologies. A great example is the ability to integrate RabbitMQ to further scale your system, due to it being an open-source message-broker written in Erlang.

Using Erlang to Increase Legacy System Security 

One of the main concerns with an outdated system is that it leaves business-critical data and applications vulnerable to cybercrime. In 2020, an estimated 78% of businesses were still using outdated legacy systems to support business-critical operations and to hold sensitive data.

Erlang Allows Your Legacy System to Become More Innovative, Without Replacing it Entirely

Innovation continues to be a desirable trend for many businesses, as a means of differentiating from competitors and securing business growth in increasingly tech-focused markets. As a versatile language, Erlang can facilitate improved innovation in your legacy system.

Much of this can be achieved by adapting to new technologies dependent on your business needs. But by refactoring legacy systems, companies are also able to provide more innovative services for customers. Erlang Solutions achieved this with OTP Bank, utilising LuErl alongside an improved and future-proofed banking infrastructure to create a modern banking system for the largest commercial bank in Hungary.

Future-Proofing Your Legacy System with Erlang Solutions

In 2023 and beyond, it’s of paramount importance that your business considers how your legacy system can be improved, regardless of the sector you operate within.

The Erlang Solutions team has decades of experience using Erlang to improve legacy system environments, whilst assessing the potential for further innovations with Elixir, RabbitMQ and other solutions.

If you’d like us to assess how Erlang could improve your system, or you’d like to find out more about the process, please don’t hesitate to contact our team directly.

The post Future-proofing legacy systems with Erlang appeared first on Erlang Solutions.

by Cara May-Cole at August 24, 2023 07:20

August 15, 2023

Profanity

Profanity 0.14.0

Apologies for the late blog post. We have good news though! Two weeks ago we released Profanity 0.14.0!

13 people contributed to this release: Daniel Santos, @DebXWoody, @H3rnand3zzz, @ike08, @MarcoPolo-PasTonMolo, @mdosch, @pasis, @paulfertser, @shahab-vahedi, @sjaeckel, @techmetx11, @thexhr and @jubalh.

Also a big thanks to our sponsors: @mdosch, @LeSpocky, @jamesponddotco and one anonymous sponsor!

We introduced a new /privacy command, which should make it easier to find all privacy-related settings, and we added vCard support (XEP-0054)!

With /plugins install we now have a more convenient way to install plugins directly from the web.

Sharing of PGP keys got easier with the /pgp sendpub and /pgp autoimport commands. This is compatible with PSI and Pidgin, but doesn’t have an official XEP.

You can configure libstrophe-internal settings via the new /strophe command.

There are plenty more fixes and improvements. For a list of changes, please see the 0.14.0 release notes or the git history.

August 15, 2023 23:00

August 11, 2023

The XMPP Standards Foundation

The XMPP Newsletter June & July 2023

Welcome to the XMPP Newsletter, great to have you here again! This issue covers the month of June & July 2023. Many thanks to all our readers and all contributors!

Like this newsletter, many projects and their efforts in the XMPP community are a result of people’s voluntary work. If you are happy with the services and software you may be using, please consider saying thanks or help these projects! Interested in supporting the Newsletter team? Read more at the bottom.

XMPP and Google Summer of Code 2023

The XSF has been accepted again as a hosting organisation for GSoC 2023 and received two slots for XMPP contributors!

On Dino:

On Moxxy:

  • The first blog post, detailing the plan to implement basic group chat functionality.
  • The second blog post, describing the plans to bring a basic implementation of XEP-0045 into Moxxy’s XMPP library moxxmpp.
  • The third blog post, sketching the frontend implementation plan.
XSF and Google Summer of Code 2023

XSF fiscal hosting projects

The XSF offers fiscal hosting for XMPP projects. Please apply via Open Collective. For more information, see the announcement blog post. Current projects:

XMPP Events

Talks

  • Une messagerie instantanée qui respecte vos libertés ? [FR] (“An instant messenger that respects your freedoms?”): Through a brief history of the web, depicting its current centralization and its problems, Adrien Bourmault, member of the XMPP Standards Foundation, will introduce you to the problems posed by non-free instant messaging, based on centralized applications and services. He will also explore the solutions offered by decentralization and free software with XMPP. See the video below.

Videos

Articles

Software news

Clients and applications

Servers

  • The ejabberd mod_s3_upload module gained support for the use of a separate download host. This allows clients to download media content from a statically hosted S3 bucket. Initially, this feature was proposed to allow ejabberd to integrate with Garage, an open-source distributed object storage service tailored for self-hosting.

Libraries & Tools

Extensions and specifications

The XMPP Standards Foundation develops extensions to XMPP in its XEP series in addition to XMPP RFCs.

Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).

Proposed

The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

  • Reporting Account Affiliations
    • This specification documents a way for an XMPP server to report to other entities the relationship it has with a user on its domain.

New

  • No new XEPs this month.

Deferred

If an experimental XEP is not updated for more than twelve months, it will be moved off Experimental to Deferred. If there is another update, it will put the XEP back onto Experimental.

  • No XEPs deferred this month.

Updated

Last Call

Last calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call can help improve the XEP before returning it to the Council for advancement to Stable.

  • No Last Call this month.

Stable

  • No XEP moved to stable this month.

Deprecated

  • No XEP deprecated this month.

Spread the news!

Please share the news on other networks:

Subscribe to the monthly XMPP newsletter

Also check out our RSS Feed!

Looking for job offers or want to hire a professional consultant for your XMPP project? Visit our XMPP job board.

Newsletter Contributors & Translations

This is a community effort, and we would like to thank translators for their contributions. Volunteers are welcome! Translations of the XMPP Newsletter will be released here (with some delay):

  • English (original): xmpp.org
    • General contributors: Adrien Bourmault (neox), Alexander “PapaTutuWawa”, Arne, cal0pteryx, emus, Licaon_Kter, Ludovic Bocquet, melvo, MSavoritias (fae,ve), nicola, XSF iTeam
  • French: jabberfr.org and linuxfr.org
    • Translators: Adrien Bourmault (neox), alkino, anubis, Arkem, Benoît Sibaud, mathieui, nyco, Pierre Jarillon, Ppjet6, Ysabeau
  • German: xmpp.org and anoxinon.de
    • Translators: Jeybe, wh0nix
  • Italian: notes.nicfab.eu
    • Translators: nicola
  • Spanish: xmpp.org
    • Translators: daimonduff, TheCoffeMaker

Help us to build the newsletter

This XMPP Newsletter is produced collaboratively by the XMPP community. Each month’s newsletter issue is drafted in this simple pad. At the end of each month, the pad’s content is merged into the XSF Github repository. We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. You have a project and want to spread the news? Please consider sharing your news or events here, and promote it to a large audience.

Tasks we do on a regular basis:

  • gathering news in the XMPP universe
  • short summaries of news and events
  • summary of the monthly communication on extensions (XEPs)
  • review of the newsletter draft
  • preparation of media images
  • translations
  • communication via media accounts

License

This newsletter is published under CC BY-SA license.

August 11, 2023 00:00

August 10, 2023

Erlang Solutions

5 ways Elixir programming can improve business performance

Elixir is a simple, lightweight programming language that is built on top of the Erlang virtual machine. It offers straightforward syntax, impressive performance and a raft of powerful features. It uses your digital resources in the most efficient way.

This is all very well, but what does that mean in practice? Aside from impressing your web development team, what can Elixir do for your business?

In this blog, we’ll look at how Elixir’s benefits translate into competitive advantage, and how it helps you reduce cost, time to market and process efficiency. Or to put it another way, here are five ways Elixir programming can improve business performance.

The joys of simplicity 

In comparison to other programming languages, Elixir is relatively simple to master. It borrows from other languages, which means experienced programmers tend to pick it up quite quickly. It uses easy-to-grasp syntax, which helps developers work more quickly using less code. 

Essentially, Elixir focuses on function. It’s all about getting the desired result in the simplest possible way. 

What does that mean for your business? Most obviously, Elixir programming can speed up the development of new software or updates to existing applications. It allows developers to do more with less, which is a major advantage during a talent shortage. 

In addition, it tends to produce more reliable programs. Simple code is easier to debug, while complexity increases the chances of something going wrong.

The virtues of concurrency

Concurrency is the ability to handle lots of tasks at the same time, and Elixir is designed with concurrency at its core. 

It’s a lightweight language, which means it runs processes in a highly resource-efficient way. With processes that typically use less than 1KB of RAM, running lots of them simultaneously is no problem for Elixir-based applications.

But why is this beneficial in wider business terms? Well, it means applications can handle large numbers of users and instructions without slowing down, creating better experiences. Concurrency also improves reliability. Concurrent processes run independently of each other, which means a problem with one will not affect the performance of the rest. 
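To make this concrete, here is a minimal Elixir sketch that fans out a batch of independent work items, one lightweight BEAM process each, and then collects the results; the module name and numbers are purely illustrative:

# Each Task.async/1 call starts its own lightweight BEAM process,
# so running tens of thousands of them concurrently is not a problem.
defmodule ConcurrencyDemo do
  def run(n \\ 100_000) do
    1..n
    |> Enum.map(fn i -> Task.async(fn -> i * i end) end)
    |> Enum.map(&Task.await/1)
    |> Enum.sum()
  end
end

IO.inspect(ConcurrencyDemo.run(), label: "sum of squares")

Because each task runs in its own isolated process with its own heap, the scheduler can spread the work across all available cores without any extra code.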

Making the most of your resources

As we’ve seen, Elixir helps you make the most of your human resources. At the same time, it also helps you to fully utilise your digital ones.

Elixir automatically uses all of the processing capacity at its disposal. If that capacity increases, it will utilise the extra resources without requiring your developers to write lots of new code. Put simply, Elixir-based applications tend to run at optimal performance for their hardware environment.
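As a quick illustration, you can ask the BEAM how many scheduler threads it has started from an IEx shell; by default it is one per CPU core:

System.schedulers_online()
# => e.g. 8 on an 8-core machine; processes are spread across all of them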

This has a number of advantages for your business. Applications make full use of available processing power to provide faster, smoother user experiences. From an ROI point of view, it means none of your expensive IT kit goes to waste.

By using resources in an optimal way, Elixir also makes it easier to quickly scale your software and applications. More about that below. 

Elixir means simple scalability

Scalability is everything in a digital world. Can your applications serve ten thousand users as easily as 500, without any drop in performance? How about 100,000? 

In other words, can they grow as your business grows? And how easy is it to add capacity quickly, so that you can grasp new opportunities before they disappear?

Elixir is designed for easy scalability. As we’ve seen, it automatically makes full use of your hardware resources, making it the perfect language for applications that experience frequent spikes in demand. The lightweight nature of Elixir processes also means you can grow to a significant size with limited processing power.

But when you do reach the limits of your current hardware, Elixir’s adeptness with concurrency makes it easy to share workloads across a cluster of machines. 

An Elixir-based application can run multiple processes on a single machine. Alternatively, it can run millions of processes across lots of machines, creating a highly scalable environment. Elixir creates seamless communication channels between all the elements of a distributed system, which further encourages the efficient use of resources.
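As a rough sketch of what this looks like in practice, connecting to another BEAM node and running code there takes only a few lines; the node names and cookie below are placeholders:

# Assumes a second node was started on the same machine with
# `iex --sname worker --cookie demo`, and that this code runs under
# `iex --sname main --cookie demo`. Replace :"worker@myhost" with the
# node name shown in the other shell's prompt.
node = :"worker@myhost"
true = Node.connect(node)

# The function runs in a process on the remote node, yet the calling
# code looks just like local process spawning.
Node.spawn(node, fn ->
  IO.puts("hello from #{inspect(Node.self())}")
end)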

You can do a lot of this with other programming languages, but not generally so easily. You may need to integrate third-party tools, for example. That adds to the cost, development hours and time to market. The beauty of Elixir is that scalability is built in.

Elixir in practice

When you add all this together, it creates significant performance advantages. Everything we’ve discussed so far – simplicity, concurrency, optimal resource use and scalability – coalesce to create more robust, efficient and future-proof applications. 

This is highlighted by our work with Bleacher Report, the second-largest sports website in the world.

Bleacher report Elixir

At peak times, the site’s mobile app serves over 200,000 concurrent requests. Bleacher Report wanted to transition the app from Ruby to Elixir, after it started presenting technical challenges at scale. That transition, supported by Erlang Solutions, created a much faster, more scalable application that was also significantly less resource intensive. The system now easily handles over 200 million push notifications per day. The number of servers it needs has reduced from 150 to eight. If you’d like to know more about that transformation, and what we did to support it, you can read the full case study here.

The Elixir business performance benefit

As we’ve seen, programming language matters. It matters for performance, cost, speed to market and scalability. Elixir, based on the powerful Erlang engine, produces positive results in all these areas.

Elixir is still a relatively young language – barely a decade or so old, in fact – but that is to its benefit. It was designed for the challenges that digital-first businesses face today, and not for an era when the mobile internet was still in its infancy. It is already used by scores of top-tier companies to create better user experiences – and to create them faster and at scale.

Want to know more about efficient, effective development with Elixir, and how it can enhance your own business performance? Why not drop us a line?

The post 5 ways Elixir programming can improve business performance appeared first on Erlang Solutions.

by Cara May-Cole at August 10, 2023 07:00

August 09, 2023

Snikket

State of Snikket 2023: The Apps

As promised in our introduction to the series, welcome to the first of our ‘State of Snikket’ update posts! This installment features all the app development news you could wish for.

So what’s new in the world of Snikket apps?

UI/UX

If you’ve been following Snikket development for a while, you might remember that we were receiving UX advice on making our apps easier and more fun to use, thanks to the team at Simply Secure. Recently they’ve been busy with a UX transformation of their own, including renaming themselves SuperBloom. From the blog post announcing this on their website:

“A superbloom is a rare event, when long-dormant wildflower seeds bloom together to transform a harsh landscape with renewed energy and resilience. We believe technology design is at a Superbloom inflection point, and we’re excited to be shaping it into a beautiful future.”

We’re pleased to have them working with us again, creating some mock-ups (or “wireframes”) for updating the look and feel of the Snikket apps.

These will guide the ongoing evolution of our existing Android and iOS apps, and we plan to use them when we begin prototyping web and desktop clients, using the new Snikket SDK (more on that later).

So what about the existing apps?

iOS

The release of our iOS app was the last big news we shared on the app front, and Snikket iOS was warmly welcomed! However it’s also fair to say it has had a couple of teething problems and is still a bit less polished than our Android app.

Some of these issues are due to various constraints in iOS, requiring apps to be designed very differently to apps on other platforms. We have also had difficulties finding people who are familiar with both XMPP and iOS development, and who have time and motivation to work with us on Snikket for iOS.

Nevertheless, we have a good relationship with the developers of Siskin - which our iOS app is based on - and we’ll continue to work on improving it. If you’re keen to help, we’re always looking for additional beta testers.

Android

Meanwhile our Android app, the first app we released, continues to be widely used. It derives from Conversations by Daniel Gultsch (iNPUTmice, also creator of the Ltt.rs email app for Android). We also maintain a good relationship with Daniel, and keep a close eye on upstream improvements.

In fact, our app follows Conversations so closely that maintaining it as a build flavour upstream is under consideration as a potential option for the future. That would automate some of the work of releasing new versions, allowing us to bring new Conversations features and bug fixes to people using Snikket more quickly.

Speaking of new features, the release of Conversations 3.0 will come with a whole Santa sack of them (nitty-gritty technical details here), which will eventually make their way into Snikket on Android.

Some of the anticipated features include emoji reactions, multimedia messages, improved message editing (including edit histories), and full support for replies, which Daniel says will include allowing us to jump to the original message that was replied to.

Another big change is in the handling of attachments, such as photos and files sent in chats. Once Snikket is rebased on Conversations 3.0 these will be invisible to other apps on your device, unless and until you choose to export them. Just as you’d expect when they arrive in chats that are end-to-end encrypted, to protect your privacy.

One change that we’re really excited about will finally bring the concept of Snikket’s circles to the app’s interface. This will allow people to easily filter their chats, for example between “Family”, “Friends” and “Work”. If you join a circle - for example one called “Family” - everyone in the Family circle will automatically be added to your contact list, and you’ll be added to theirs.

After Conversations 3.0 is released, we’ll be able to group the chats associated with each circle together in your contact list, rather than having them all mixed together as they are now. Once the new interface arrives, you can safely share dank memes with your gaming friends in the “Game Night” circle, with confidence they won’t be accidentally shared with your family.

So when will all these new features arrive?

Initial plans aimed for a November release, but it’s well established that software development can be unpredictable. Especially in the open-source world where maintainers are often stretched between many responsibilities. So even if it takes a bit longer to spit and polish, we’re not worried - and we’re fairly confident the first version of Snikket based on it will be appearing next year. Watch this space!

Okay, we’ve covered Android and iOS. So what about these web and desktop apps we’ve listed as a goal of ours for some time?

With more development time becoming available (more on that in a future post), we’ve been exploring how we might finally make these a reality.

The future of building Snikket apps

One such exploration has resulted in a prototype ‘Snikket SDK’ (Software Development Kit).

“A what now?”

Basically, it’s a cross-platform library that can handle all the digital smoke signals involved in communicating with an XMPP server. It presents developers familiar with other chat APIs with an expert smoke signal interpreter, which they can connect to any chat app interface they design.

Our hope is that this will make it easier to develop Snikket clients for the web, desktop, and potentially other platforms. This includes mobile GNU/Linux devices like the PinePhone, used with interfaces like Phosh by distros like PureOS, Mobian, and postmarketOS.

If this works out, whenever we make improvements to the SDK they can easily be shared by all the apps using it, massively reducing the work involved in supporting apps for an increasing number of platforms. But let’s not get ahead of ourselves.

So far it’s an early prototype - we haven’t even made a final decision on the programming language yet.

Currently, we’re experimenting with Haxe, which can be compiled to a number of other languages, including JavaScript. Using this approach will allow us to build on existing XMPP libraries for the target platforms.

By providing an easy-to-use development kit with all Snikket’s features already implemented, we hope to make it easier for per-platform development to focus on just the UI/UX layer, instead of getting dragged down reimplementing XMPP and business logic for every platform.

It’s important to note that we are not aiming to produce another XMPP library - many of those already exist. Rather, we’re focusing on a layer above that - an SDK that allows developers to easily work with a Snikket (or compatible XMPP server) with zero knowledge of how XMPP works.

We’ll share additional progress as it happens, so once again, watch this space!

That’s all the news we’ve got for today.

The next post will focus on the work we’ve been doing to set up hosting of Snikket servers as a human-friendly subscription service, and an ethical source of ongoing funding for Snikket development. It will also cover how Snikket has been funded so far and what we’ve been spending the money on.

After that, we’re planning to take you on a deep dive into new laws like the Digital Markets Act in the EU - and similar ones in the UK and elsewhere - and how they could impact social enterprises like Snikket, developing Free Code software for use in decentralised networks. There’s potentially some good news here and some rather worrying news.

So keep an eye out for those over the coming weeks.

by Snikket Team (team@snikket.org) at August 09, 2023 14:05

State of Snikket 2023

This is our first blog post for quite a while, and the last few have all been technical updates of various kinds about the Snikket software. In fact it’s been almost two years since the last post that gave a general progress update on the Snikket project itself, so let’s fix that!

You’ll be pleased to hear that Snikket is very much alive, and although there hasn’t been much of a show to see here, a bunch of stuff has been going on backstage.

We plan to catch you up with our progress and various other topics through a series of upcoming blog posts. A number of these are inspired from questions we receive often, others are related to updates in the project, or changes in the industry and ecosystem which Snikket is a part of.

Rather than cram a diverse range of topics into a single post, we’re going to break it up a little. Over the coming weeks, we’ll answer questions such as:

  • What have we been working on over the last year?

  • What is the status of the Android and iOS apps?

  • What about the web and desktop apps we’ve been promising?

  • What did JMP.chat launch and what does that have to do with Snikket?

  • Where has funding come from to keep the lights lit at Snikket HQ?

  • What are the longer term plans for project funding?

  • What’s this Digital Markets Act thingy, is it good or bad, and what implications does it (and other similar laws in the pipeline) have for the future of Snikket, XMPP, interoperability of chat apps, and decentralised online services more generally?

  • What did we get up to at the recent FOSSY conference in Portland, US?

  • What kind of test devices do we use and where do they come from?

Curious? Our first post is live, and it’s about the app development. Jump right in to State of Snikket 2023: The Apps!

by Snikket Team (team@snikket.org) at August 09, 2023 14:00

August 08, 2023

Gajim

Gajim 1.8.1

Gajim 1.8.1 brings improvements for file previews, a default encryption setting, and many small improvements and fixes. Thank you for all your contributions!

What’s New

Gajim 1.8.1 introduces a new setting for default encryption. It is not set by default yet, but you can enable it yourself and test how it works for you. As soon as we have gathered enough experience, a default encryption (e.g. OMEMO) may be set by Gajim.

OMEMO Logo, by fiaxh - https://github.com/siacs/Conversations/blob/master/art/omemo_logo.svg, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=46134840

Gajim generates a preview for files sent/received in your chats, if it isn’t disabled in preferences. As soon as a file transfer is detected, Gajim generates a preview UI and then shows a preview. This preview UI now shows a loading icon, which reduces “jumping” of messages around a file transfer when waiting for the actual preview. A preview UI will also be shown if previews are completely disabled (just without the actual preview). This brings you the same buttons and actions for every file, regardless of preview preferences.

What else changed:

  • Improvements for the interaction of search view and group chat participants list
  • Bug fixes for the certificate viewer
  • Bug fixes for avatar selection
  • Fixes for Gajim’s data form display
  • Many small improvements and fixes

Have a look at the changelog for a complete list.

Gajim

As always, don’t hesitate to contact us at gajim@conference.gajim.org or open an issue on our Gitlab.

August 08, 2023 00:00

August 04, 2023

The XMPP Standards Foundation

Elbe-Sprint Hamburg 2023: Post-Sprint Summary

Elbe-Sprint 2023: Post-Sprint Summary

In June the Elbe-Sprint 2023 took place in Hamburg and it was a great first experience after all the pandemic in the past years for many participants. In this blog post we want to summarize progress we’ve made during the sprint.

First of all, many thanks to the CCCHH and their members for offering the opportunity and their space in Hamburg-Altona. It was a great location and it served the purpose well.

We met on Thursday night for dinner and got some delicious pizza and Italian food. We had a short welcoming round, and an exchange on XMPP topics followed immediately. Afterwards, we moved to a small park area. Coincidentally, a small festival called “Altonale - Festival of the cultural Future” was taking place there, with welcoming open seating on the grass. As the festival title says, it was organised around discussing future topics - the perfect place to kick off the XMPP Elbe Sprint!

Developers at the location where the Elbe-Sprint takes place

Then, on Friday, we kicked off at 10:00 in the morning with a short presentation of what everyone planned to work on. The developers present and their topics clustered around the ANIS update, Conversations 3.0 and lttrs, Dino, PGPainless and XMPP Providers. We spent the night at a Kurdish & Turkish restaurant in the city and finished near the river Elbe, enjoying the scenery.

Developers trying to find the right node :-)

Moving on after a good first day, we continued working on Saturday. After the lunch break we had three presentations: one on the ANIS update, then an XMPP introduction talk so that anyone interested could join and ask questions, and finally a status update on PGPainless. As a little highlight, we spent the night in an Indian restaurant in the famous St. Pauli quarter of Hamburg. As in many other countries, sharing the food that was served felt like a fitting match for how we share knowledge, the XMPP protocol and the technology we are developing, and, last but not least, how we work together.

Developers sharing knowledge & dinner in an Indian restaurant

After the final dinner, we went out for a walk, took a boat ride along the river Elbe and watched the sunset and scenery together. We finished with another round of drinks at the festival where we started:

How do we want to build our (communication) future?

Developers happily enjoying a boat ride on an Elbe ferry

On Sunday we concluded the Elbe-Sprint after a pre-lunch working phase and a small closing ceremony, where everybody summarized what they had been working on and what they achieved during the sprint.

Developers proceeding with their final cherry-picking :-)

As you can read, we were not only focusing on work; we also spent a decent amount of time on social events and personal exchange, about XMPP but also about many other things in life. That is what many seem to enjoy a lot during sprints, too.

Developers being served with stickers & melons :-)

See you at the next sprint hopefully,
Eddie — The organizer

August 04, 2023 00:00

August 03, 2023

Erlang Solutions

Blockchain in Sustainable Programming

The benefits of blockchain implementation across multiple sectors are well-documented, but how can this decentralised solution be used to achieve more sustainable programming?

As the effects of the ongoing climate crisis continue to impact weather patterns and living conditions across the planet, we must continue to make every aspect of our lives, from transport and energy usage to all of our technology, greener and more sustainable.

Sustainable programming and green coding practices play a crucial role in this transition. These concepts exist to help both coding and programming become as environmentally efficient as possible, by minimising the energy consumption of these processes. But sustainable programming also means ensuring that the solutions we program can be used to promote and achieve sustainable goals in the future.

Blockchain still represents a relatively new technology, but as its capabilities continue to expand, so does its role in creating a greener industry for programmers as well as the wider tech sector. To better understand this, it’s important to first understand how blockchain solutions, supported by Erlang, are utilised today.

How is Erlang Promoting Innovation in the Blockchain Space?

Though a complex technology to define, blockchain is essentially a more secure, decentralised way for companies and organisations to record transactions. Both time-stamping and reference links, as well as the ability for anyone with access rights to track transactions but not alter them, provide an opportunity for blockchain to completely change the way organisations handle data. This is achieved through the 6 main blockchain principles.

As a coding language, Erlang represents the ideal foundation for blockchain solutions thanks to several key benefits. Firstly, Erlang is a high-level, functional language that can be quickly deployed, which is often a necessity in the fast-moving, competitive markets that blockchain is usually deployed within, like fintech.

Both Erlang and Elixir also don’t manipulate memory directly, which makes them immune to many traditional memory-safety vulnerabilities, such as buffer overflows. This allows them to offer safer, more secure blockchain solutions.

Companies are also opting to use Erlang for blockchain due to its high availability, resiliency and massive concurrency, among other benefits. You can learn more about each of these, as well as the benefits mentioned above, in our recent tech deep dive.

Once established, Erlang blockchains can be used to better support sustainable programming initiatives in several key ways.

Contemporary and Sustainable Uses for Blockchain Technology

It’s important to first note that early blockchain implementations consumed a lot of energy to power their decentralised network, which raised concerns regarding its sustainability as a solution. However, continued innovations have allowed companies to limit this issue considerably.

Ethereum, currently the world’s second-largest blockchain by market cap, was able to cut the energy consumption of its network by 99.9% late last year.

blockchain sustainable programming

Source: Statistica

This was achieved thanks to a switch from a Proof of Work (PoW) chain to a new Proof of Stake (PoS) approach, which Ethereum called “The Merge”. There are continued hopes that similar changes in the green coding behind the blockchain can deliver further reductions in energy consumption in the future.

PwC has created what’s known as its Blockchain Sustainability Framework to promote these improvements and to ensure future efforts can further support sustainable programming. In addition to these energy consumption improvements, blockchain can also work sustainably for businesses, governments and organisations in a myriad of other ways.

Blockchain’s Role in Sustainable Infrastructure and Renewable Energy

The OECD (Organisation for Economic Co-operation and Development) recently published a case study analysing how blockchain technologies can serve as a digital enabler for sustainable infrastructure.

The study confirmed that sustainable infrastructure services are already being impacted by blockchain technology and that its core capabilities could be applied in several use cases, from monitoring infrastructure standards to optimising emissions certificate trading systems.

The UN’s Environment Programme published a similar article last year, evidencing blockchain’s role in fighting the ongoing climate crisis. Several businesses worldwide have already used blockchain to support renewable energy projects and to reduce their future energy costs.

Improved Monitoring of Supply Chains Through Blockchain Smart Contracts

If companies can implement smart contracts, stored on blockchain technology, opportunities exist to better track and automate their supply chain logistics. Smart contracts are programs that run automatically and securely through the blockchain once certain pre-determined conditions have been met.

This often removes the need for intermediaries, considerably reducing the time taken on signing agreements. It also then provides an automated, optimised way to manage stock, conduct peer-to-peer transactions or manage a supply chain.

But these contracts can also be used to achieve greener outcomes through the sustainable programming potential of blockchain. By creating smart contracts, companies can track the performance of supply chains, creating clear data on environmental impacts. This data can be monitored, and operational improvements can be made to reduce these emissions. Often, these changes can also cut costs in addition to creating a more sustainable supply chain.

Tracking Sustainability Metrics Through Blockchain

Further to supply chain impacts, blockchain technology allows companies to track a number of different sustainability metrics, such as carbon emissions, renewable energy credits, waste reduction and other variables.

All of these metrics can be tracked in real-time, creating actionable data which companies can use to become more sustainable and further optimise their business practices.

As blockchain technology continues to advance, new monitoring solutions, such as the ability to track plastic production or water usage, will enable both more detailed data and the capacity to implement even more sustainable changes in areas like manufacturing and product design. These improvements could benefit both companies and governments in the near future.

Accessing Blockchain Benefits with Erlang Solutions

Blockchain’s role in sustainable programming will only continue to grow as the technology develops. Companies should be looking to build the foundation of their blockchain efforts today so they can access these benefits in the near future.

Our team at Erlang Solutions continues to work to unlock its potential in blockchain advancements for companies worldwide. 

To find out more about how Erlang Solutions blockchain support could help your company achieve more sustainable programming, contact our team today.

The post Blockchain in Sustainable Programming appeared first on Erlang Solutions.

by Cara May-Cole at August 03, 2023 10:15

July 27, 2023

Erlang Solutions

Ship RabbitMQ logs to Elasticsearch

RabbitMQ is a popular message broker that facilitates the exchange of data between applications. However, as with any system, it’s important to have visibility into the logs generated by RabbitMQ to identify issues and ensure smooth operation. In this blog post, we’ll walk you through the process of shipping RabbitMQ logs to Elasticsearch, a distributed search and analytics engine. By centralising and analysing RabbitMQ logs with Elasticsearch, you can gain valuable insights into your system and easily troubleshoot any issues that arise.

Logs processing system architecture

To build this architecture, we’re going to set up four components in our system. Each of them has its own role. Here they are:

  • A logs publisher.
  • A RabbitMQ server with a queue to publish data to and receive data from.
  • A Logstash pipeline to process data from the RabbitMQ queue.
  • An Elasticsearch index to store the processed logs.
Components of building logs

Installation

1. Logs Publisher

Logs can come from any software. It can be from a web server (Apache, Nginx), a monitoring system, an operating system, a web or mobile application, and so on. The logs give information about the working history of any software. 

If you don’t have one yet, you can use my simple example here: https://github.com/baoanh194/rabbitmq-simple-publisher-consumer

2. RabbitMQ

The logs publisher will be publishing the logs to a RabbitMQ queue.

Instead of going through a full RabbitMQ installation, we’re going to use a RabbitMQ Docker instance to keep things simple. You can find Docker installation instructions for your preferred operating system here: https://docs.docker.com/engine/install/

To start a RabbitMQ container, run the following command:

RabbitMQ container command
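A typical invocation, assuming the official management-enabled image, looks roughly like this:

# Start RabbitMQ with the management plugin enabled (the image tag is an example)
docker run -d --hostname my-rabbit --name rabbitmq \
  -p 5672:5672 -p 15672:15672 \
  rabbitmq:3-management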

This command starts a RabbitMQ container with the management plugin enabled. Once it is running, you can access the RabbitMQ management console by going to http://localhost:15672/ in your web browser. By default, the username/password is guest/guest.

RabbitMQ container

3. Elasticsearch

Go and check this link to install and configure Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/current/install-elasticsearch.html

To store RabbitMQ data for visualisation in Kibana, you need to start an Elasticsearch container. You can do this by running the following command (I’m using Docker to set up Elasticsearch):

Elasticsearch command
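A single-node container for local testing can be started with something along these lines (the version tag is an example; pick the release you want to run):

# Start a single-node Elasticsearch container for local testing
docker run -d --name elasticsearch \
  -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  docker.elastic.co/elasticsearch/elasticsearch:8.8.0

With recent 8.x images, security is enabled by default and a password for the elastic user is printed in the container logs on first start.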

When you start Elasticsearch for the first time, some security configuration is required.

4. Logstash

If you haven’t installed or worked with Logstash before, don’t worry. Have a look at the Elastic docs: https://www.elastic.co/guide/en/logstash/current/installing-logstash.html

It’s very detailed and easy to read.

In my case, I installed Logstash on macOS using Homebrew:

Logstash on MacOS
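With Homebrew, the installation is a one-liner:

# Install Logstash via Homebrew
brew install logstash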

Once Logstash is installed on your machine, let’s create the pipeline to process the data.

Paste the configuration below into your pipelines.conf file:

(Put the new config file under /opt/homebrew/etc/logstash.)

Pipeline on Logstash
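A minimal configuration in this spirit, reading from a RabbitMQ queue and writing to Elasticsearch, could look like the sketch below; the queue name, credentials and index name are assumptions and need to match your own setup:

input {
  rabbitmq {
    host     => "localhost"
    port     => 5672
    queue    => "logs"        # the queue your publisher sends to
    durable  => true
    user     => "guest"
    password => "guest"
  }
}

output {
  elasticsearch {
    hosts    => ["https://localhost:9200"]
    index    => "rabbitmq-logs-%{+YYYY.MM.dd}"
    user     => "elastic"
    password => "CHANGE_ME"   # the password generated when Elasticsearch first started
    ssl_certificate_verification => false  # acceptable for a local test, not for production
  }
  stdout { codec => rubydebug }  # also print events to the console while testing
}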

Run your pipeline with Logstash:

Run pipeline in Logstash
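Assuming Homebrew has put the logstash binary on your PATH, point Logstash at the config file created above:

# Run Logstash with the pipeline configuration from the previous step
logstash -f /opt/homebrew/etc/logstash/pipelines.conf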

Here is a screenshot of what you should see if your RabbitMQ Docker instance is running and your Logstash pipeline is working correctly:

Logstash Pipeline

Let’s ship some logs

Now everything is ready. Go to the logs publisher’s root folder and run the send.js script:

send.js script
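Assuming the example publisher from the repository linked above, this boils down to:

# Publish a test log message to the RabbitMQ queue
node send.js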

You can check that the data has been sent to Elasticsearch:

curl -k -u elastic https://localhost:9200/_search?pretty

If everything goes well, you will see a result like the screenshot below:

Elastic

Configure Kibana to Visualize RabbitMQ Data

Additionally, you can configure Kibana to visualize the RabbitMQ data on Elastic. By configuring Kibana, you can create visualisations such as charts, graphs, and tables that make it easy to understand the data and identify trends or anomalies. For example, you could create a chart that shows the number of messages processed by RabbitMQ over time, or a table that shows the top senders and receivers of messages.

Kibana also allows you to build dashboards, which are collections of visualisations and other user interface elements arranged on a single screen. Dashboards can be shared with others in your organization, making it easier for team members to collaborate and troubleshoot issues. You can refer to this link for how to set up Kibana: https://www.elastic.co/pdf/introduction-to-logging-with-the-elk-stack

Conclusion

In summary, shipping RabbitMQ logs to Elasticsearch offers benefits such as centralized log storage, quick search and analysis, and improved system troubleshooting. By following the steps outlined in this blog post, you can set up a system to handle large volumes of logs and gain real-time insights into your messaging system. Whether you’re running a small or large RabbitMQ instance, shipping logs to Elasticsearch can help you optimise and scale your system.

The post Ship RabbitMQ logs to Elasticsearch appeared first on Erlang Solutions.

by Bao Hoang at July 27, 2023 10:01

July 25, 2023

Paul Schaub

PGPainless meets the Web-of-Trust

We are very proud to announce the release of PGPainless-WOT, an implementation of the OpenPGP Web of Trust specification using PGPainless.

The release is available on the Maven Central repository.

The work on this project began a bit over a year ago as an NLnet project which received funding through the European Commission’s NGI Assure program. Unfortunately, somewhere along the way I lost motivation to work on the project, as I failed to see any concrete users. Other projects seemed more exciting at the time.

NLnet Logo
NGI Assure Logo

Fast forward to the end of May, when Wiktor reached out and connected me with Heiko, who was interested in the project. The two of us decided to work together and I quickly rebased my – at this point ancient and outdated – feature branch onto the latest PGPainless release. At the end of June we started the joint work, and roughly a month later, today, we can release a first version 🙂

Big thanks to Heiko for his valuable contributions and the great boost in motivation working together gave me 🙂
Also big thanks to NLnet for sponsoring this project in such a flexible way.
Lastly, thanks to Wiktor for his talent to connect people 😀

The Implementation

We decided to write the implementation in Kotlin. I had attempted to learn Kotlin multiple times before, but had quickly given up each time without an actual project to work on. This time I stayed persistent and now I’m a convinced Kotlin fan 😀 Rewriting the existing codebase was a breeze, the line count dropped drastically, and the amount of syntactic sugar which was suddenly available blew me away! Now I’m considering steadily porting PGPainless to Kotlin. But back to the Web-of-Trust.

Our implementation is split into 4 modules:

  • pgpainless-wot parses OpenPGP certificates into a generalized form and builds a flow network by verifying third-party signatures. It also provides a plugin for pgpainless-core.
  • wot-dijkstra implements a query algorithm that finds paths on a network. This module has no OpenPGP dependencies whatsoever, so it could also be used for other protocols with similar requirements.
  • pgpainless-wot-cli provides a CLI frontend for pgpainless-wot
  • wot-test-suite contains test vectors from Sequoia PGP’s WoT implementation

The code in pgpainless-wot can either be used standalone via a neat little API, or it can be used as a plugin for pgpainless-core to enhance the encryption / verification API:

/* Standalone */
Network network = PGPNetworkParser(store).buildNetwork();
WebOfTrustAPI api = new WebOfTrustAPI(network, trustRoots, false, false, 120, refTime);

// Authenticate a binding
assertTrue(
    api.authenticate(fingerprint, userId, isEmail).isAcceptable());

// Identify users of a certificate via the fingerprint
assertEquals(
    "Alice <alice@example.org>",
    api.identify(fingerprint).get(0).getUserId());

// Lookup certificates of users via userId
LookupAPI.Result result = api.lookup(
    "Alice <alice@example.org>", isEmail);

// Identify all authentic bindings (all trustworthy certificates)
ListAPI.Result result = api.list();


/* Or enhancing the PGPainless API */
CertificateAuthorityImpl wot = CertificateAuthorityImpl
    .webOfTrustFromCertificateStore(store, trustRoots, refTime);

// Encryption
EncryptionStream encStream = PGPainless.encryptAndOrSign()
    [...]
    // Add only recipients we can authenticate
    .addAuthenticatableRecipients(userId, isEmail, wot)
    [...]

// Verification
DecryptionStream decStream = [...]
[...]  // finish decryption
MessageMetadata metadata = decStream.getMetadata();
assertTrue(metadata.isAuthenticatablySignedBy(userId, isEmail, wot));

The CLI application pgpainless-wot-cli mimics Sequoia PGP’s neat sq-wot tool, both in argument signature and output format. This has been done in an attempt to enable testing of both applications using the same test suite.

pgpainless-wot-cli can read GnuPG’s keyring, fetch certificates from the Shared OpenPGP Certificate Directory (using pgpainless-cert-d of course :P) and ingest arbitrary .pgp keyring files.

$ ./pgpainless-wot-cli help     
Usage: pgpainless-wot [--certification-network] [--gossip] [--gpg-ownertrust]
                      [--time=TIMESTAMP] [--known-notation=NOTATION NAME]...
                      [-r=FINGERPRINT]... [-a=AMOUNT | --partial | --full |
                      --double] (-k=FILE [-k=FILE]... | --cert-d[=PATH] |
                      --gpg) [COMMAND]
  -a, --trust-amount=AMOUNT
                         The required amount of trust.
      --cert-d[=PATH]    Specify a pgp-cert-d base directory. Leave empty to
                           fallback to the default pgp-cert-d location.
      --certification-network
                         Treat the web of trust as a certification network
                           instead of an authentication network.
      --double           Equivalent to -a 240.
      --full             Equivalent to -a 120.
      --gossip           Find arbitrary paths by treating all certificates as
                           trust-roots with zero trust.
      --gpg              Read trust roots and keyring from GnuPG.
      --gpg-ownertrust   Read trust-roots from GnuPGs ownertrust.
  -k, --keyring=FILE     Specify a keyring file.
      --known-notation=NOTATION NAME
                         Add a notation to the list of known notations.
      --partial          Equivalent to -a 40.
  -r, --trust-root=FINGERPRINT
                         One or more certificates to use as trust-roots.
      --time=TIMESTAMP   Reference time.
Commands:
  authenticate  Authenticate the binding between a certificate and user ID.
  identify      Identify a certificate via its fingerprint by determining the
                  authenticity of its user IDs.
  list          Find all bindings that can be authenticated for all
                  certificates.
  lookup        Lookup authentic certificates by finding bindings for a given
                  user ID.
  path          Verify and lint a path.
  help          Displays help information about the specified command

The README file of the pgpainless-wot-cli module contains instructions on how to build the executable.

Future Improvements

The current implementation still has potential for improvements and optimizations. For one, the Network object containing the result of many costly signature verifications is currently ephemeral and cannot be cached. In the future it would be desirable to change the network parsing code to be agnostic of reference time, including any verifiable signatures as edges of the network, even if those signatures are not yet – or no longer – valid. This would allow us to implement some caching logic that could write out the network to disk, ready for future web of trust operations.

That way, the network would only need to be re-created whenever the underlying certificate store is updated with new or changed certificates (which could also be optimized to only update relevant parts of the network). The query algorithm would need to filter out any inactive edges with each query, depending on the query’s reference time. This would be far more efficient than re-creating the network with each application start.

But why the Web of Trust?

End-to-end encryption suffers from one major challenge: when sending a message to another user, how do you know that you are using the correct key? How can you prevent an active attacker from handing you fake recipient keys, impersonating your peer? Such a scenario is called a Machine-in-the-Middle (MitM) attack.

On the web, the most common countermeasures against MitM attacks are certificate authorities, which certify the TLS certificates of website owners, requiring them to first prove their identity to some extent. Let’s Encrypt for example first verifies that you control the machine that serves a domain before issuing a certificate for it. Browsers trust Let’s Encrypt, so users can now authenticate your website by validating the certificate chain from the Let’s Encrypt CA key down to your website’s certificate.

The Web-of-Trust follows a similar model, with the difference that you are your own trust-root and decide which CAs you want to trust (which in some sense makes you your own “meta-CA”). The Web-of-Trust is therefore far more decentralized than the fixed set of TLS trust-roots baked into web browsers. You can use your own key to issue trust signatures on keys of contacts that you know are authentic. For example, you might have met Bob in person and he handed you a business card containing his key’s fingerprint. Or you helped a friend set up their encrypted communications and in the process you two exchanged fingerprints manually.

In all these cases, in order to initiate a secure communication channel, you needed to exchange the fingerprint via an out-of-band channel. The real magic only happens once you take into consideration that your close contacts could also do the same for their close contacts, which makes them CAs too. This way, you could authenticate Charlie via your friend Bob, who you know is trustworthy, because – come on, it’s Bob! Everybody loves Bob!

An example OpenPGP Web-of-Trust network diagram: simply by delegating trust to the Neutron Mail CA and to Vincenzo, Aaron is able to authenticate a number of certificates.

The Web-of-Trust becomes really useful if you work with people that share the same goal. Your workplace might be one of them, your favorite Linux distribution’s maintainer team, or that non-Profit organization/activist collective that is fighting for a better tomorrow. At work for example, your employer’s IT department might use a local CA (such as an instance of the OpenPGP CA) to help employees to communicate safely. You trust your workplace’s CA, which then introduces you safely to your colleagues’ authentic key material. It even works across business’ boundaries, e.g. if your workplace has a cooperation with ACME and you need to establish a safe communication channel to an ACME employee. In this scenario, your company’s CA might delegate to the ACME CA, allowing you to authenticate ACME employees.

As you can see, the Web-of-Trust becomes more useful the more people are using it. Providing accessible tooling is therefore essential to improve the overall ecosystem. In the future, I hope that OpenPGP clients such as MUAs (e.g. Thunderbird) will embrace the Web-of-Trust.

by vanitasvitae at July 25, 2023 14:02

July 24, 2023

Ignite Realtime Blog

Jabber Browsing Openfire Plugin 1.0.1 released

The Ignite Realtime community is happy to announce a new release of the Jabber Browsing plugin for Openfire.

This is a plugin for the Openfire Real-time Communications server. It provides an implementation for service discovery using the jabber:iq:browse namespace, as specified in XEP-0011: Jabber Browsing. Note that this feature is considered obsolete! The plugin should only be used by people that seek backwards compatibility with very old and very specific IM clients.

This release is a maintenance release. It adds translations and fixes one bug. More details are available in the changelog.

Your instance of Openfire should automatically display the availability of the update in the next few hours. Alternatively, you can download the new release of the plugin at the Jabber Browsing plugin archive page.

If you have any questions, please stop by our community forum or our live groupchat.

For other release announcements and news follow us on Twitter and Mastodon.

by guus at July 24, 2023 17:34

Agent Information plugin for Openfire release 1.0.1

The Ignite Realtime community is happy to announce a new release of the Agent Information plugin for Openfire.

This plugin implements the XEP-0094 ‘Agent Information’ specification for service discovery using the jabber:iq:agents namespace. Note that this feature is considered obsolete! The plugin should only be used by people that seek backwards compatibility with very old and very specific IM clients.

This release is a maintenance release. It adds translations and fixes one bug. More details are available in the changelog.

Your instance of Openfire should automatically display the availability of the update in the next few hours. Alternatively, you can download the new release of the plugin at the Agent Information plugin archive page.

If you have any questions, please stop by our community forum or our live groupchat.

For other release announcements and news follow us on Twitter and Mastodon.

by guus at July 24, 2023 09:25

July 20, 2023

Ignite Realtime Blog

Certificate Manager plugin for Openfire release 1.1.1

The Ignite Realtime community is happy to announce a new release of the Certificate Manager plugin for Openfire.

This plugin allows you to automate TLS certificate management tasks. This is particularly helpful when your certificates are short-lived, like the ones issued by Let’s Encrypt.

This release is a maintenance release. It adds translations. More details are available in the changelog.

Your instance of Openfire should automatically display the availability of the update in the next few hours. Alternatively, you can download the new release of the plugin at the Certificate Manager plugin archive page.

If you have any questions, please stop by our community forum or our live groupchat.

For other release announcements and news follow us on Twitter and Mastodon.

by guus at July 20, 2023 17:31