Planet Jabber

October 10, 2024

Erlang Solutions

Why Open Source Technology is a Smart Choice for Fintech Businesses

Traditionally, the fintech industry relied on proprietary software, with usage and distribution restricted by paid licences. Open-source technologies were distrusted in fintech because publicly visible code raised security concerns in complex systems.

But fast-forward to today and financial institutions, including neobanks like Revolut and Monzo, have embraced open source solutions. These banks have built technology stacks on open-source platforms, using new software and innovation to strengthen their competitive edge.

While proprietary software has its role, it faces challenges exemplified by Oracle Java’s subscription model changes, which have led to significant cost hikes. In contrast, open source delivers flexibility, scalability, and more control, making it a great choice for fintechs aiming to remain adaptable.

Curious why open source is the smart choice for fintech? Let’s look into how this shift can help future-proof operations, drive innovation, and enhance customer-centric services.

The impact of Oracle Java’s pricing changes

Before we understand why open source is a smart choice for fintech, let’s look at a recent example that highlights the risks of relying on proprietary software—Oracle Java’s subscription model changes.

A change to subscription

Java, known as the “language of business,” has been the top choice for developers and 90% of Fortune 500 companies for over 28 years, due to its stability, performance, and strong Oracle Java community.

In January 2023, Oracle quietly shifted its Java SE subscription model to an employee-based system, charging businesses based on total headcount, not just the number of users. This change alarmed many subscribers and resulted in steep increases in licensing fees. According to Gartner, these changes made operations two to five times more expensive for most organisations.


Oracle Java SE Universal Subscription Global Price List (by volume)

Impact on Oracle Java SE user base

By January 2024, many Oracle Java SE subscribers had switched to OpenJDK, the open-source version of Java. Online sentiment towards Oracle has been unfavourable, with many users expressing dissatisfaction in forums. Those who stuck with Oracle are now facing hefty subscription fee increases with little added benefit.

Lessons from Oracle Java SE

For fintech companies, Oracle Java’s pricing changes have highlighted the risks of proprietary software. In particular, there are unexpected cost hikes, less flexibility, and disruptions to critical infrastructure. Open source solutions, on the other hand, give fintech firms more control, reduce vendor lock-in, and allow them to adapt to future changes while keeping costs in check.

The advantages of open source technologies for Fintech

Open source software is gaining attention in financial institutions, thanks to the rise of digital financial services and fintech advancements. 

It is expected to grow by 24% by 2025, and companies that embrace open source benefit from enhanced security, support for cryptocurrency trading, and a boost to fintech innovation.

Cost-effectiveness

The cost advantages of open-source software have been a major draw for companies looking to shift from proprietary systems. For fintech companies, open-source reduces operational expenses compared to the unpredictable, high costs of proprietary solutions like Oracle Java SE.

Open source software is often free, allowing fintech startups and established firms to lower development costs and redirect funds to key areas such as compliance, security, and user experience. It also avoids fees like:

  • Multi-user licences
  • Administrative charges
  • Ongoing annual software support charges

These savings help reduce operating expenses while enabling investment in valuable services like user training, ongoing support, and customised development, driving growth and efficiency.

A solution to big tech monopolies

Monopolies in tech, particularly in fintech, are increasing. As reported by CB Insights, about 80% of global payment transactions are controlled by just a few major players. These monopolies stifle innovation and drive up costs.

Open-source software decentralises development, preventing any single entity from holding total control. It offers fintech companies an alternative to proprietary systems, reducing reliance on monopolistic players and fostering healthy competition. Open-source models promote transparency, innovation, and lower costs, helping create more inclusive and competitive systems.

Transparent and secure solutions

Security concerns have been a major roadblock, causing companies and startups to hesitate to adopt open-source software.

A common myth about open source is that its public code makes it insecure. In practice, open source benefits from transparency, as it allows for continuous public scrutiny. Security flaws are discovered and addressed quickly by the community, unlike proprietary software, where vulnerabilities may remain hidden.

An example is Vocalink, which powers real-time global payment systems. Vocalink uses Erlang, an open-source language designed for high-availability systems, ensuring secure, scalable payment handling. The transparency of open source allows businesses to audit security, ensure compliance, and quickly implement fixes, leading to more secure fintech infrastructure.

Ongoing community support

Beyond security, open source benefits from vibrant communities of developers and users who share knowledge and collaborate to enhance software. This fosters innovation and accelerates development, allowing for faster adaptation to trends or market demands.

Since the code is open, fintech firms can build custom solutions, which can be contributed back to the community for others to use. The rapid pace of innovation within these communities helps keep the software relevant and adaptable.

Interoperability

Interoperability is a game-changer for open-source solutions in financial institutions, allowing for the seamless integration of diverse applications and systems, which is essential for financial services with complex tech stacks.

By adopting open standards (publicly accessible guidelines ensuring compatibility), financial institutions can eliminate costly manual integrations and enable plug-and-play functionality. This enhances agility, allowing institutions to adopt the best applications without being tied to a single vendor.

A notable example is NatWest’s Backplane, an open-source interoperability solution built on FDC3 standards. Backplane enables customers and fintechs to integrate their desktop apps with various banking and fintech applications, enhancing the financial desktop experience. This approach fosters innovation, saves time and resources, and creates a more flexible, customer-centric ecosystem.

Future-proofing for longevity

Open-source software has long-term viability. Since the source code is accessible, even if the original team disbands, other organisations, developers or the community at large can maintain and update the software. This ensures the software remains usable and up-to-date, preventing reliance on unsupported tools.

Open Source powering Fintech trends

According to the latest study by McKinsey & Company, Artificial Intelligence (AI), machine learning (ML), blockchain technology, and hyper-personalisation will be among the key technologies driving financial services in the next decade.

Open-source platforms will play a key role in supporting and accelerating these developments, making them more accessible and innovative.

AI and fintech innovation

  • Cost-effective AI/ML: Open-source AI frameworks like TensorFlow, PyTorch, and Scikit-learn enable startups to prototype and deploy AI models affordably, with the flexibility to scale as they grow. This democratisation of AI allows smaller players to compete with larger firms.
  • Fraud detection and personalisation: AI-powered fraud detection and personalised services are central to fintech innovation. Open-source AI libraries help companies like Stripe and PayPal detect fraudulent transactions by analysing patterns, while AI enables dynamic pricing and custom loan offers based on user behaviour.
  • Efficient operations: AI streamlines back-office tasks through automation, knowledge graphs, and natural language processing (NLP), improving fraud detection and overall operational efficiency.
  • Privacy-aware AI: Emerging technologies like federated learning and encryption tools help keep sensitive data secure, enabling rapid AI innovation while maintaining privacy and compliance.

Blockchain and fintech 

Open-source blockchain platforms allow fintech startups to innovate without the hefty cost of proprietary systems:

  • Open-source blockchain platforms: Platforms like Ethereum, Bitcoin Core, and Hyperledger are decentralising finance, providing transparency, reducing reliance on intermediaries, and reshaping financial services.
  • Decentralised finance (DeFi): DeFi is projected to see an impressive rise, with P2P lending growing from $43.16 billion in 2018 to an estimated $567.3 billion by 2026. Platforms like Uniswap and Aave, built on Ethereum, are pioneering decentralised lending and asset management, offering an alternative to traditional banking. By 2023, Ethereum alone locked $23 billion in DeFi assets, proving its growing influence in the fintech space.
  • Enterprise blockchain solutions: Open-source frameworks like Hyperledger Fabric and Corda are enabling enterprises to develop private, permissioned blockchain solutions, enhancing security and scalability across industries, including finance.
  • Cost-effective innovation: Startups leveraging open-source blockchain technologies can build innovative financial services while keeping costs low, helping them compete effectively with traditional financial institutions.

Hyper-personalisation

Hyper-personalisation is another key trend in fintech, with AI and open-source technologies enabling companies to create highly tailored financial products. This shift moves away from the traditional “one-size-fits-all” model, helping fintechs solve niche customer challenges and deliver more precise services.

Consumer demand for personalisation

A Salesforce survey found that 65% of consumers expect businesses to personalise their services, while 86% are willing to share data to receive more customised experiences.

Source: Salesforce, “State of the Connected Customer”

The expectation for personalised services is shaping how financial institutions approach customer engagement and product development.

Real-world examples of open-source fintech

Companies like Robinhood and Chime leverage open-source tools to analyse user data and create personalised financial recommendations. These platforms use technologies like Apache Kafka and Apache Spark to process real-time data, improving the accuracy and relevance of their personalised offerings, from customised investment options to tailored loan products.

Implementing hyper-personalisation lets fintech companies strengthen customer relationships, boost retention, and increase deposits. By leveraging real-time, data-driven technologies, they can offer highly relevant products that foster customer loyalty and maximise value throughout the customer lifecycle. With the scalability and flexibility of open-source solutions, companies can provide precise, cost-effective personalised services, positioning themselves for success in a competitive market.

Erlang and Elixir: Open Source solutions for fintech applications

Released as open-source in 1998, Erlang has become essential for fintech companies that need scalable, high-concurrency, and fault-tolerant systems. Its open-source nature, combined with the capabilities of Elixir (which builds on Erlang’s robust architecture), enables fintech firms to innovate without relying on proprietary software, providing the flexibility to develop custom and efficient solutions.

Both Erlang’s and Elixir’s architectures are designed to make near-zero downtime achievable, making them well-suited for real-time financial transactions.

Why Erlang and Elixir are ideal for Fintech:

  • Reliability: Erlang’s and Elixir’s design ensures that applications continue to function smoothly even during hardware or network failures, crucial for financial services that operate 24/7, guaranteeing uninterrupted service. Elixir inherits Erlang’s reliability while providing a more modern syntax for development.
  • Scalability: Erlang and Elixir can handle thousands of concurrent processes, making them perfect for fintech companies looking to scale quickly, especially when dealing with growing data volumes and transactions. Elixir enhances Erlang’s scalability with modern tooling and enhanced performance for certain types of workloads.
  • Fault tolerance: Built-in error detection and recovery features ensure that unexpected failures are managed with minimal disruption. This is vital for fintech applications, where downtime can lead to significant financial losses. Erlang’s automatic-restart philosophy, which Elixir inherits, helps systems stay available and avoid losing transactions (a minimal supervision sketch follows this list).
  • Concurrency & distribution: Both Erlang and Elixir excel at managing multiple concurrent processes across distributed systems. This makes them ideal for fintechs with global operations that require real-time data processing across various locations.

Open-source fintech use cases

Several leading fintech companies have already used Erlang to build scalable, reliable systems that support their complex operations and real-time transactions.

  • Klarna: This major European fintech relies on Erlang to manage real-time e-commerce payment solutions, where scalability and reliability are critical for managing millions of transactions daily.
  • Goldman Sachs: Erlang is utilised in Goldman Sachs’ high-frequency trading platform, allowing for ultra-low latency and real-time processing essential for responding to market conditions in microseconds.
  • Kivra: Erlang/Elixir supports Kivra’s backend services, managing secure digital communications for millions of users and ensuring constant uptime and data security.

Erlang and Elixir: supporting future fintech trends

The features of Erlang and Elixir align well with emerging fintech trends:

  • DeFi and Decentralised Applications (dApps): With the growth of decentralised finance (DeFi), Erlang’s and Elixir’s fault tolerance and real-time scalability make them ideal for building dApps that require secure, distributed networks capable of handling large transaction volumes without failure.
  • Hyperpersonalisation: As demand for hyperpersonalised financial services grows, Erlang and Elixir’s ability to process vast amounts of real-time data across users simultaneously makes them vital for delivering tailored, data-driven experiences.
  • Open banking: Erlang and Elixir’s concurrency support enables fintechs to build seamless, scalable platforms in the open banking era, where various financial systems must interact across multiple applications and services to provide integrated solutions.

Erlang and Elixir can handle thousands of real-time transactions with zero downtime, making them well-suited for trends like DeFi, hyperpersonalisation, and open banking. Their flexibility and active developer community ensure that fintechs can innovate without being locked into costly proprietary software.

To conclude

Fintech businesses are navigating an increasingly complex and competitive landscape where traditional solutions no longer provide a competitive edge. If you’re a company still reliant on proprietary software, ask yourself: Is your system equipped to expect the unexpected? Can your existing solutions keep up with market demands? 

Open-source technologies offer a solution to these challenges. Fintech firms can reduce costs, improve security, and, most importantly, innovate and scale according to their needs. Whether by reducing vendor lock-ins, tapping into a vibrant developer community, or leveraging customisation, open-source software is set to transform the fintech experience, providing the tools necessary to stay ahead in a digital-first world. If you’re interested in exploring how open-source solutions like Erlang or Elixir can help future-proof your fintech systems, contact the Erlang Solutions team.

The post Why Open Source Technology is a Smart Choice for Fintech Businesses appeared first on Erlang Solutions.

by Erlang Solutions Team at October 10, 2024 09:40

XMPP Interop Testing

Incoming: Improvements!

A new boost in the project’s budget will allow us to approximately double the test coverage of our project (and add a couple of nice features)!

Much of the XMPP Interop Testing project was made possible as the work was funded through the NGI0 Core Fund. This is a fund established by NLnet with financial support from the European Commission’s Next Generation Internet programme.

It is quite remarkable how far the effects of funding reach: it allowed us to work out our plans, take various pre-existing bits and bobs, and quickly and efficiently turn a small tool used for internal testing into a proper testing framework that any XMPP server implementation can use. That snowballed into bug fixes for server implementations and improvements to specifications used by many. A relatively small fund thus improved the quality of open-standard-based communication used in one shape or another by countless people, daily!

We are so happy and grateful to NLnet for boosting our project’s grant! With the additional work, we will add the following improvements:

  • Have better test coverage by writing more tests;
  • Improve feedback when tests fail or do not run at all;
  • Add a new test account provisioning option;
  • Improve test selection configuration;
  • Automate recurring maintenance tasks;
  • Add support for other build systems.

All of this will help us improve our framework, help our users improve their products, and allow new projects to more easily deploy our open and free solutions into their CI pipelines!

You can expect a lot of these improvements to become available to you, soon!

by Guus der Kinderen at October 10, 2024 09:10

October 08, 2024

ProcessOne

WebPush support on your fluux.io instance

We’re excited to announce the latest enhancement to Fluux.io services – the integration of WebPush support. This significant update extends our services beyond FCM/APNs, enabling push notifications for XMPP across various platforms. Our push notification capabilities are no longer limited to native mobile clients on iOS, macOS and Android, but also extend to web applications in browsers like Safari, Chrome, Firefox and more, including the mobile versions of Safari and Chrome. This advancement broadens the scope for XMPP clients, offering new possibilities and a more extensive reach. Please note that WebPush support is also available to customers using our on-premise ejabberd Business Edition.

To enable it, go to your services in your fluux.io console, select “Push Notifications” and then “+ WebPush”.

You will be prompted for an appid (typically the domain you want to enable WebPush on), for example fluux.io here. It will generate a VAPID key that will be used by ejabberd to sign the push notifications sent to the user’s browser.

Checking “View Config” will allow you to see the VAPID public key. It will be required to let the browser subscribe to notifications. Your website also needs to register a service worker that will be responsible for displaying the notification when a push is received.

As an example, we provide a small ejabberd client to test the whole workflow. It is pre-populated with a test user and associated appid/key.

The first step is to authenticate an XMPP user through your service. Then click “Enable Push“.

It will ask for authorization to enable push notifications and create a subscription with the FCM/Apple/Mozilla services. Then the XMPP client (using strophe.js) will send a stanza to enable offline messaging. ejabberd will now send a notification to this entry point, which will send a push to the user’s browser.

To trigger it, disconnect/close all opened XMPP sessions of your test user and send that user a message from another test user. Your browser will display a notification from your website with the message snippet and its author. You can then check the triggered notification in your Push logs console.

Alternatively, you can check the test user and its associated devices, and send a test notification.

The post WebPush support on your fluux.io instance first appeared on ProcessOne.

by Sébastien Luquet at October 08, 2024 12:56

October 04, 2024

The XMPP Standards Foundation

The XMPP Newsletter September 2024

XMPP Newsletter Banner

Welcome to the XMPP Newsletter, great to have you here again! This issue covers the month of September 2024.

XSF Announcements

If you are interested in joining the XMPP Standards Foundation as a member, please apply by November 24th, 2024!

The XMPP Standards Foundation is also calling for candidates for the XSF Board 2024 and the XSF Council 2024. Get involved in the XMPP Standards Foundation’s organisational decisions as well as in the specifications we publish. If you are interested in running for Board or Council, please add a wiki page about your candidacy to one or both of the following sections by November 3rd, 2024, 00:00 UTC. Note: XMPP Council members must be elected members of the XSF; however, there is no such restriction for the Board of Directors.

XMPP and Google Summer of Code 2024

The XSF has been accepted as a hosting organisation at GSoC in 2024 again! These XMPP projects have received a slot and have kicked off with coding:

XSF and Google Summer of Code 2024

XSF Fiscal Hosting Projects

The XSF offers fiscal hosting for XMPP projects. Please apply via Open Collective. For more information, see the announcement blog post. Current projects you can support:

XMPP Events

  • Berlin XMPP Meetup (DE / EN): monthly meeting of XMPP enthusiasts in Berlin, every 2nd Wednesday of the month at 6pm local time
  • XMPP Italian happy hour [IT]: monthly Italian XMPP web meeting, every third Monday of the month at 7:00 PM local time (online event, with web meeting mode and live streaming).

Videos

  • Detailed and comprehensive introduction to Rivista XJP: the XMPP PubSub Content Management System.

XMPP Articles

XMPP Software News

XMPP Clients and Applications

  • Cheogram has released version 2.15.3-4 for Android.
  • Conversations has released version 2.16.7 for Android.
  • Psi+ 1.5.2041 installer has been released.
  • Gajim 1.9.4 and 1.9.5 have been released. These releases come with integrated support for the XMPP Providers project. Furthermore, there is now support for “Hats” (XEP-0317), which allow you to assign roles to group chat participants, e.g. “Support”, “Expert” or really anything you like to assign. Last but not least, Gajim’s Microsoft Store release has been improved in many ways. You can check the changelog for more details.
  • Movim 0.28 has been released. This new version (code named “Tempel”) brings a “Freshly redesigned Search panel, improved account gateways and administration features, databases fixes and a new call flow and conference lobby” among many other fixes and improvements.

Movim 0.28 (Tempel) Introducing the new call flow and conference lobby

XMPP Servers

XMPP Libraries & Tools

Ignite Realtime community:

Extensions and specifications

The XMPP Standards Foundation develops extensions to XMPP in its XEP series in addition to XMPP RFCs.

Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).

Proposed

The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

New

  • Version 0.1.0 of XEP-0493 (OAuth Client Login)
    • Promoted to Experimental (XEP Editor: dg)
  • Version 0.1.0 of XEP-0494 (Client Access Management)
    • Promoted to Experimental (XEP Editor: dg)

Deferred

If an experimental XEP is not updated for more than twelve months, it will be moved off Experimental to Deferred. If there is another update, it will put the XEP back onto Experimental.

  • No XEPs deferred this month.

Updated

  • Version 2.13.2 of XEP-0004 (Data Forms)
    • Add section on empty and absent values. (gk)
  • Version 1.35.1 of XEP-0045 (Multi-User Chat)
    • Add explicit error definition when non-owners attempt to use owner-specific functionality. (gk)
  • Version 1.3.1 of XEP-0133 (Service Administration)
    • Fixed typo in example for Get User Last Login Time (dc)
  • Version 0.4.2 of XEP-0264 (Jingle Content Thumbnails)
    • Restrict ‘width’ and ‘height’ to the 0..65535 range, instead of being unbounded integers. This is in accordance to XEP-0084 and XEP-0221 for instance. (egp)
  • Version 0.2.0 of XEP-0272 (Multiparty Jingle (Muji))
    • Send Jingle IQs to real JID
    • Define how to use with XEP-0482
    • Adjust namespace (lmw)
  • Version 1.1.2 of XEP-0313 (Message Archive Management)
    • Fix JID and affiliation of the first two witches in the MUC example.
    • Fix duplicated ‘id’ in MUC example.
    • Fix indentation in examples. (egp)
  • Version 0.3.1 of XEP-0474 (SASL SCRAM Downgrade Protection)
    • Fix typos
    • Adapt attack-model section to new simplified protocol (tm)

Last Call

Last calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call can help improve the XEP before returning it to the Council for advancement to Stable.

  • No Last Call this month.

Stable

  • No XEP moved to Stable this month.

Deprecated

  • No XEP deprecated this month.

Rejected

  • No XEP rejected this month.

Spread the news

Please share the news on other networks:

Subscribe to the monthly XMPP newsletter
Subscribe

Also check out our RSS Feed!

Looking for job offers or want to hire a professional consultant for your XMPP project? Visit our XMPP job board.

Newsletter Contributors & Translations

This is a community effort, and we would like to thank translators for their contributions. Volunteers and more languages are welcome! Translations of the XMPP Newsletter will be released here (with some delay):

  • English (original): xmpp.org
    • General contributors: Adrien Bourmault (neox), Alexander “PapaTutuWawa”, Arne, cal0pteryx, emus, Federico, Gonzalo Raúl Nemmi, Jonas Stein, Kris “poVoq”, Licaon_Kter, Ludovic Bocquet, Mario Sabatino, melvo, MSavoritias (fae,ve), nicola, Schimon Zachary, Simone Canaletti, singpolyma, XSF iTeam
  • French: jabberfr.org and linuxfr.org
    • Translators: Adrien Bourmault (neox), alkino, anubis, Arkem, Benoît Sibaud, mathieui, nyco, Pierre Jarillon, Ppjet6, Ysabeau
  • Italian: notes.nicfab.eu
    • Translators: nicola
  • Spanish: xmpp.org
    • Translators: Gonzalo Raúl Nemmi
  • German: xmpp.org
    • Translators: Millesimus

Help us to build the newsletter

This XMPP Newsletter is produced collaboratively by the XMPP community. Each month’s newsletter issue is drafted in this simple pad. At the end of each month, the pad’s content is merged into the XSF Github repository. We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. You have a project and want to spread the news? Please consider sharing your news or events here, and promote it to a large audience.

Tasks we do on a regular basis:

  • gathering news in the XMPP universe
  • short summaries of news and events
  • summary of the monthly communication on extensions (XEPs)
  • review of the newsletter draft
  • preparation of media images
  • translations
  • communication via media accounts

Unsubscribe from the XMPP Newsletter

To unsubscribe from this list, please log in first. If you have not previously logged in, you may need to set up an account with the appropriate email address.

License

This newsletter is published under CC BY-SA license.

October 04, 2024 00:00

October 03, 2024

Erlang Solutions

Why do systems fail? Tandem NonStop system and fault tolerance

If you’re an Elixir, Gleam, or Erlang developer, you’ve probably heard about the capabilities of the BEAM virtual machine, such as concurrency, distribution, and fault tolerance. Fault tolerance was one of the biggest concerns of Tandem Computers. They created their Tandem Non-Stop architecture for high availability in their systems, which included ATMs and mainframes.

In this post, I’ll be sharing the fundamentals of the NonStop architecture design with you. Their approach to achieving high availability in the presence of failures is similar to some implementations in the Erlang Virtual Machine, as both rely on concepts of processes and modularity.

Systems with High Availability

Why do systems fail? This question should probably be asked more often, considering all the factors it involves. It was central to the NonStop architecture because achieving high availability depends on understanding system failures. 

For Tandem systems, any system has critical components that could potentially cause failures. How often do you ask yourself how long your system can operate before a failure? There is a metric for this known as MTBF (mean time between failures), calculated by dividing the total operating hours of the system by the number of failures. The result represents the average hours of uninterrupted operation; for example, a system that runs 8,760 hours in a year and fails twice over that period has an MTBF of 4,380 hours.

Many factors can affect the MTBF, including administration, configuration, maintenance, power outages, hardware failures, and more. So, how can you survive these eventualities to achieve at least virtual high availability in your systems?


High availability in hardware has taught us important insights about continuous operation. Some hardware implementations decompose the system into modules, so that failures are contained and operation continues on backup modules instead of bringing the whole system down and requiring a restart. The main concept, from this point of view, is to use modules as units of failure and replacement.


High Availability for Software Systems

But what about high availability in software? Just as with hardware, we can find important lessons from operating system designers who decompose systems into modules as units of service. This approach provides a unit of protection and fault containment.

To achieve fault tolerance in software, it’s important to address similar insights from the NonStop design:

  • Modularity through processes and messages.
  • Fault containment.
  • Process pairs for fault tolerance.
  • Data integrity.

Can you recognise some similarities so far?

The NonStop architecture essentially relies on these concepts. The key to high availability, as I mentioned before, is modularity as a unit of service failure and protection.

A process should have a fail-fast mechanism: it should be able to detect a failure during its operation, send a failure signal, and then stop. In this way, a system achieves fault detection and fault containment, helped by processes sharing no state.


Another important consideration for your system is how long it takes to recover from a failure. Jim Gray, a software designer and researcher at Tandem Computers, proposed in his paper ”Why Do Computers Stop and What Can Be Done About It?” a model of failure involving two kinds of bugs: Bohrbugs, which cause critical failures during operation, and Heisenbugs, transient faults that are hard to reproduce and can persist in a system for years.

Implementing Process-Pair Strategies

The previous categorisation helps us better understand strategies for implementing a process-pair design, based on a primary process and a backup process (a minimal Elixir sketch of the checkpointing idea follows this list):

  • Lockstep: Primary and backup processes execute the same task, so if the primary fails, the backup continues the execution. This works well for hardware failures, but in the presence of Heisenbugs both processes will fail in the same way.
  • State checkpointing: A requestor entity is connected to a process pair. When the primary process stops operating, the requestor switches to the backup process. You need to design the requestor logic yourself.
  • Automatic checkpointing: Similar to the previous strategy, but the kernel manages the checkpointing.
  • Delta checkpointing: Similar to state checkpointing, but using logical rather than physical updates.
  • Persistence: When the primary process fails, the backup process starts operating without any state. The system must implement a way to synchronise all the modules and avoid corrupted interactions.
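
For illustration only, here is a small Elixir sketch of the state-checkpointing idea, with invented module and message names rather than Tandem’s (or OTP’s) actual mechanism: the primary replicates its state to a backup after every update, so a requestor can switch to the backup if the primary dies.

```elixir
defmodule Primary do
  # Start the primary, telling it which backup process to checkpoint to.
  def start(backup), do: spawn(fn -> loop(%{}, backup) end)

  defp loop(state, backup) do
    receive do
      {:update, key, value, requestor} ->
        new_state = Map.put(state, key, value)
        # Checkpoint: replicate the new state to the backup before replying.
        send(backup, {:checkpoint, new_state})
        send(requestor, {:ok, key})
        loop(new_state, backup)
    end
  end
end

defmodule Backup do
  def start, do: spawn(fn -> loop(%{}) end)

  defp loop(state) do
    receive do
      # Keep the latest checkpointed state from the primary.
      {:checkpoint, new_state} ->
        loop(new_state)

      # The requestor detected that the primary failed and switches over;
      # the backup resumes from the last checkpoint instead of from scratch.
      {:take_over, requestor} ->
        send(requestor, {:new_primary, self(), state})
        loop(state)
    end
  end
end

# Hypothetical usage:
#   backup  = Backup.start()
#   primary = Primary.start(backup)
#   send(primary, {:update, :balance, 100, self()})
```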

All of these insights are drawn from Jim Gray’s paper, written in 1985 and referenced in Joe Armstrong’s 2003 thesis, “Making Reliable Distributed Systems in the presence of software errors”. Joe emphasised the importance of the Tandem NonStop system design as an inspiration for the OTP design principles. 

Elixir and High Availability

So if you’re a software developer learning Elixir, you’ll probably be amazed by all the capabilities and great tooling available to build software systems. By leveraging frameworks like Phoenix and toolkits such as Ecto, you can build full-stack systems in Elixir. However, to fully harness the power of the Erlang virtual machine (BEAM) you must understand processes. 

Just as the Tandem computer system relied on transactions, fault containment and a fail-fast mechanism, Erlang achieves high availability through processes. Both systems consider it important to modularise systems into units of service and failure: processes. 

About the process

A process is the basic unit of abstraction in Erlang, a crucial concept because the Erlang virtual machine (BEAM) is built around it. Elixir and Gleam share the same virtual machine, which is why this concept is important for the entire ecosystem.

A process:

  • Is a strongly isolated entity.
  • Is lightweight to create and destroy.
  • Can only be interacted with through message passing.
  • Shares no state with other processes.
  • Does what it is supposed to do, or fails (see the short example below).
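
As a minimal Elixir illustration of these points (invented names, not from any particular codebase), the sketch below spawns a worker, talks to it only through messages, and uses a monitor to observe its fail-fast behaviour without being taken down with it:

```elixir
defmodule ProcessDemo do
  def run do
    # Spawning a process is a lightweight operation.
    pid = spawn(fn -> worker() end)

    # Monitor the worker so we are notified if it fails, instead of failing with it.
    ref = Process.monitor(pid)

    # Message passing is the only way to interact with the process.
    send(pid, {:double, 21, self()})

    receive do
      {:result, value} -> IO.puts("Result: #{value}")
    end

    # Ask the worker to do something it cannot do: it fails fast, in isolation.
    send(pid, {:double, :not_a_number, self()})

    receive do
      {:DOWN, ^ref, :process, ^pid, reason} ->
        IO.puts("Worker died: #{inspect(reason)}; the caller keeps running.")
    end
  end

  defp worker do
    receive do
      {:double, n, from} ->
        send(from, {:result, n * 2})
        worker()
    end
  end
end
```

Calling ProcessDemo.run() prints the doubled value, then reports the worker’s crash while the calling process continues, which is exactly the isolation and fail-fast behaviour listed above.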

Just remember, these are the fundamentals of Erlang, which is considered a message-oriented language, and its virtual machine (BEAM), on which Elixir runs.  

Tandem NonStop BEAM

If you want to read more about processes in Elixir I recommend reading this article I wrote: Understanding Processes for Elixir Developers.

I consider it important to read papers like Jim Gray’s article because they teach us the history behind implementations that attempt to solve problems. I find it interesting to read and share these insights with the community because it’s crucial to understand the context behind the tools we use. Recognising that implementations exist for a reason and have stories behind them is essential.

 You can find many similarities between Tandem and Erlang design principles:

  •  Both aim to achieve high availability.
  •  Isolation of operations is extremely important to contain failure.
  •  Processes that share no state are crucial for building modular systems.
  •  Process interactions are key to maintaining operation in the presence of errors. While Tandem computers implemented process-pairs design, Erlang implemented OTP patterns.

To conclude

Take some time to read about the Tandem computer design. It’s interesting because these features share significant similarities with OTP design principles for achieving high availability. Failure is something we need to deal with in any kind of system, and it’s important to be aware of the reasons and know what you can do to manage it and continue your operation. This is crucial for any software developer, but if you’re an Elixir developer, you’ll probably dive deeper into how processes work and how to start designing components with them and OTP.

Thanks for reading about the Tandem NonStop system. If you like this kind of content, I’d appreciate it if you shared it with your community or teammates. You can visit this public repository on GitHub where I’m adding my graphic recordings and insights related to the Erlang ecosystem or contact the Erlang Solutions team to chat more about Erlang and Elixir.


Illustrations by Visual Partner-Ship @visual_partner 

Jaguares, ESL Americas Office 

@carlogilmar

The post Why do systems fail? Tandem NonStop system and fault tolerance appeared first on Erlang Solutions.

by Carlo Gilmar at October 03, 2024 12:02

October 02, 2024

Ignite Realtime Blog

XMPP: The Protocol for Open, Extensible Instant Messaging

Introduction to XMPP

XMPP, the Extensible Messaging and Presence Protocol, is an Instant Messaging (IM) standard of the Internet Engineering Task Force (IETF) - the same organization that standardized Email (POP/IMAP/SMTP) and the World Wide Web (HTTP) protocols. XMPP evolved out of the early XML streaming technology developed by the XMPP Open Source community and is now the leading protocol for exchanging real-time structured data. XMPP can be used to stream virtually any XML data between individuals or applications, making it a perfect choice for applications such as IM.

A Brief History

IM has a long history, existing in various forms on computers as soon as they were attached to networks. Most IM systems were designed in isolation using closed networks and/or proprietary protocols, meaning each system can only exchange messages with users on the same IM network. Users on different IM networks often can’t send or receive messages, or do so with drastically reduced features because the messages must be transported through “gateways” that use a least common denominator approach to message translation.

The problem of isolated, proprietary networks in IM systems today is similar to email systems in the early days of computer networks. Fortunately for email, the IETF created early standards defining the protocols and data formats that should be used to exchange email. Email software vendors rapidly switched to the IETF standards to provide universal exchange of email among all email users on the Internet.

In 2004 the IETF published RFC 3920 and 3921 (the “Core” and “Instant Messaging and Presence” specifications for instant messaging) officially adding XMPP, mostly known as Jabber at the time, to the list of Internet standards. A year later, Google introduced Google Talk, a service that uses XMPP as its underlying protocol.

Google’s endorsement of the XMPP protocol greatly increased the visibility and popularity of XMPP and helped pave the way for XMPP to become the Internet IM standard. Over the years, more and more XMPP-based solutions followed: from WhatsApp, Jitsi, Zoom and Grindr in the IM-sphere, Google Cloud Print, Firebase Cloud Messaging and Logitech’s Harmony Hub in the IoT-realm, to Nintendo Switch, Fortnite and League of Legends in the world of gaming.

XMPP: Open, Extensible, XML Instant Messaging

The XMPP protocol benefits from three primary features that appeal to administrators, end users and developers: an IETF open standard, XML data format, and simple extensions to the core protocol. These benefits combine to position XMPP as the most compelling IM protocol available for businesses, consumers, and organizations of any size.

Open Standard Benefits

The fact that XMPP is an open standard has led to its adoption by numerous software projects that cover a broad range of environments and users. This has helped improve the overall design of the protocol, as well as ensured a “best of breed” market of client applications and libraries that work with all XMPP servers. The vibrant XMPP software marketplace contains 90+ compatible clients that operate on all standard desktop systems and mobile devices, from mobile phones to tablets.

Wide adoption has provided real-world proof that XMPP-based software from different vendors, deployed by both large and small organizations, can work together seamlessly. For example, XMPP users logged into their personal home server and an employee logged into a corporate IM server can chat, see each other’s presence on their contact lists, and participate in chat rooms hosted on an Openfire XMPP server running at a university.

XML Data

XML is one of the most popular, robust data exchange formats in use today and has become a standard part of most software systems. As a well-matured protocol, XMPP uses the XML data format to transport data over standard TCP/IP sockets and websockets, making the protocol and its data easy to use and understand. Any developer familiar with XML can immediately work with XMPP as no special data format or other proprietary knowledge is needed. Existing tools for creating, reading, editing, and validating XML data can all be used with XMPP without significant modification. The XML foundation of XMPP greatly simplifies integration with existing environments and eases the movement of data to and from the XMPP network.

Extending XMPP

The extensible nature of XML provides much of the extension support built into XMPP. Through the use of XML namespaces, the XMPP protocol can be easily used to transport custom data in addition to standard IM messages and presence information. Software developers and companies interested in the real-time exchange of data are using XMPP as an alternative to custom data transport systems.

The XMPP community publishes standard extensions called XMPP Extension Protocols (XEPs) through the XMPP Standards Foundation (XSF). The XSF’s volunteer-driven process provides a way for companies creating innovative extensions and enhancements to the XMPP protocol to work together to create standard improvements that all XMPP users benefit from. There are well over 400 XEPs today covering a wide range of functionality, including security enhancements, user experience improvements and VoIP and video conferencing. XEPs allow the XMPP protocol to rapidly evolve and improve in an open, standards-based way.

XMPP Networks Explained

An XMPP network is composed of all the XMPP clients and servers that can reach each other on a single computer network. The biggest XMPP network is available on the Internet and connects public XMPP servers. However, people are free to create private XMPP networks within a single company’s internal LAN, on secure corporate virtual private networks, or even within a private network running in a person’s home. Within each XMPP network, each user is assigned a unique XMPP address.

Addresses - Just Like Email

XMPP addresses look exactly the same as email addresses, containing a user name and a domain name. For example, sales@acme.com is a valid XMPP address for a user account named “sales” in the acme.com domain. It is common for an organization to issue the same XMPP address and email address to a user. Within the XMPP server, user accounts are frequently authenticated against the same common user account system used by the email system.

XMPP addresses are generated and issued in the same way that email addresses are. Each XMPP domain is managed by the domain owner, and the XMPP server for that domain is used to create, edit, and delete user accounts. For example, the acme.com server is used to manage user accounts that end with @acme.com. If a company runs the acme.com server, the company sets its own policies and uses its own software to manage user accounts. If the domain is a hosted account on an Internet Service Provider (ISP) the ISP usually provides a web control panel to easily manage XMPP user accounts in the same way that email accounts are managed. The flexibility and control that the XMPP network provides is a major benefit of XMPP IM systems over proprietary public IM systems like WhatsApp, Telegram and Signal, where all user accounts are hosted by a third party.

Server Federation

XMPP is designed using a federated, client-server architecture. Server federation is a common means of spreading resource usage and control between Internet services. In a federated architecture, each server is responsible for controlling all activities within its own domain and works cooperatively with servers in other domains as equal peers.

In XMPP, each client connects to the server that controls its XMPP domain. This server is responsible for authentication, message delivery and maintaining presence information for all users within the domain. If a user needs to send an instant message to a user outside of their own domain, their server contacts the external server that controls the “foreign” XMPP domain and forwards the message to that XMPP server. The foreign XMPP server takes care of delivering the message to the intended recipient within its domain. This same server-to-server model applies to all cross-domain data exchanges, including presence information.

XMPP server federation is modeled after the design of Internet email, which has shown that the design scales to include the entire Internet and provides the necessary flexibility and control to meet the needs of individual domains. Each XMPP domain can define the level of security, quality of service, and manageability that make sense for their organization.

Conclusion

XMPP is open, flexible and extensible, making it the protocol of choice for real-time communications over the Internet. It enables the reliable transport of any structured XML data between individuals or applications. Numerous mission-critical business applications use XMPP, including chat and IM, network management and financial trading. With inherent security features and support for cross-domain server federation, XMPP is more than able to meet the needs of the most demanding environments.


by guus at October 02, 2024 09:56

ProcessOne

Matrix and XMPP: Thoughts on Improving Messaging Protocols – Part 1

For over two decades, ProcessOne has been developing large-scale messaging platforms, powering some of the largest services in the world. Our mission is to build the best messaging back-ends imaginable–an exciting yet complex challenge.

We began with XMPP (eXtensible Messaging and Presence Protocol), but the need for interoperability and support for a variety of use cases led us to implement additional protocols. Our stack now supports:

  • XMPP (eXtensible Messaging and Presence Protocol): A robust, highly scalable, and flexible protocol for real-time messaging.
  • MQTT (Message Queuing Telemetry Transport): The standard for IoT messaging, ideal for lightweight communication between devices.
  • SIP (Session Initiation Protocol): A widely used standard for voice-over-IP (VoIP) communications.
  • Matrix: A decentralized protocol for secure, real-time communication.

A Distributed Protocol That Replicates Data Across Federated Servers

This brings me to the topic of Matrix. Matrix is designed not just to be federated but also distributed. While it uses the term “decentralized,” I find this slightly misleading. A federated protocol is inherently decentralized, as it allows users across different domains to communicate–think email, XMPP, and Matrix itself. What truly sets Matrix apart from XMPP is its data distribution model.

Matrix is distributed because it aims to ensure that all participating nodes in a conversation have a copy of that conversation, typically in end-to-end encrypted form. This ensures high availability: if the primary node hosting a conversation becomes unavailable, the conversation can continue on another node.

In Matrix, a conversation is represented as a graph, a replicated document containing all events related to a group discussion. You can think of each conversation as a mini-blockchain, except that instead of forming a chain, the events create a graph.

Resource Penalty: Computing and Storage

As with any design decision, there are trade-offs. In the case of Matrix, this comes with a performance penalty. Since conversations are replicated across nodes, the protocol performs merge operations to ensure consistency between the replicated data. The higher the traffic, the greater the cost of these merge operations, which adds CPU load on both the Matrix node and its database. Additionally, there is a significant cost in terms of storage.

If the Matrix network were to scale massively, with many nodes and conversations, it would encounter the same growth challenges as blockchain protocols. Each node must store a copy of the conversation, and the amount of replication depends on the number of conversations and nodes globally. As these numbers grow, so does the replication factor.

Comparison with XMPP

XMPP, on the other hand, is event-based rather than document-based. It processes and distributes events in the order they arrive without attempting to merge conversation histories. This simpler approach avoids the replication of group chat data across federated nodes, but it comes with some limitations.

Here’s how XMPP mitigates these limitations:

  • One-on-One Conversations: Each user’s messages are archived on their server, keeping the replication factor under control (usually limited to two copies).
  • Group Chats: If a chatroom goes down, the conversation becomes unavailable for both local and remote users. However, XMPP has strategies to reduce the need for data replication. Several servers implement clustering, making it possible to upgrade the service node by node. If one node is taken down (for maintenance, for instance), another node can take over the management of the chatroom.
  • Hot Code Upgrades: Some servers, like ejabberd, allow hot code upgrades, which means minor updates can be applied without shutting down the node, minimizing downtime and the need for data replication.
  • Message Archiving for MUC Rooms: Some servers offer message archiving for multi-user chat (MUC) rooms, and also allow users to store recent chat history from selected MUCs on their local server for future reference.
  • Cluster Replication (ejabberd Business Edition): Chatrooms can be replicated within a cluster, ensuring they remain available even if a node crashes.

Thanks to the typically high uptime of XMPP servers, especially for clustered services, intermittent availability of servers in the network hasn’t posed a significant issue, so large-scale data replication hasn’t been a necessity.

What’s Next for XMPP?

This comparison suggests potential improvements for XMPP. Could XMPP benefit from an optional feature to address the centralized nature of chatrooms? Possibly. What if there were a caching and resynchronization protocol for multi-user chatrooms across different servers? This could enhance the robustness of the federation without the storage burden of full content replication, offering the best of both worlds.

What’s Next for Matrix?

Matrix, by design, comes with trade-offs. One of its key goals is to resist censorship, which is vital in certain situations and countries. That’s why we believe it is worth trying to improve the Matrix protocol to address those use cases. There’s still room to optimize the protocol, specifically by reducing the cost of running a distributed data store. We plan to propose and implement improvements on how merge operations work to make Matrix more efficient.

I’ll share our proposals in the next article.

As always, this is an open discussion, and I’d be happy to dive deeper into these topics if you’re interested.

The post Matrix and XMPP: Thoughts on Improving Messaging Protocols – Part 1 first appeared on ProcessOne.

by Mickaël Rémond at October 02, 2024 09:52

September 30, 2024

Gajim

Gajim 1.9.5

This release comes with many improvements for Gajim’s Microsoft Store version. Translations are now available for all distributions again. Thank you for all your contributions!

What’s New

Gajim now detects if you installed it from the Microsoft Store. This allows Gajim to delegate updates to the Store rather than handling updates by itself. Detecting the install method also allowed us to apply a fix for an issue which prevented native notifications from working on Windows. Last but not least, viewing received images and download folders should now work properly on Windows.

What else happened

  • Translations are available for all distributions again
  • Typing indicator has been moved above the chat input
  • Debug console received a proper search

Have a look at the changelog for a complete list.


As always, don’t hesitate to contact us at gajim@conference.gajim.org or open an issue on our Gitlab.

Gajim is free software developed by volunteers.
If you would like to support Gajim, please consider making a donation.

Donate via Liberapay:

September 30, 2024 00:00

September 28, 2024

JMP

CertWatch

As you may have already seen, on October 21st, it was reported that a long-running, successful MITM (Machine-In-The-Middle) attack against jabber.ru had been detected. The nature of this attack was not specific to the XMPP protocol in any way, but it was of special interest to us as members of the XMPP community. This kind of attack relies on being able to present a TLS certificate which anyone trying to connect will accept as valid. In this case, it was done by getting a valid certificate from Let’s Encrypt.

When it comes to mitigation strategies for client-to-server connections, luckily there is already an excellent option called channel binding. Most XMPP clients and servers already have some amount of support for this technique, and in the wake of this attack, most are scrambling to make sure their implementations are complete. Many service providers have also added CAA DNS records which can prevent the very specific way this attack was executed from succeeding.

We’ve been hard at work on a different tool that can also help with defense-in-depth for this kind of situation. Ultimately, a MITM will use a different public key from the one the server uses, even if it is wrapped in a signed certificate declared as valid by a trustworthy authority (like Let’s Encrypt). If we know what key is seen when trying to connect, and we know what key the server administrator expects us to see, we can detect an ongoing MITM of this variety even when the certificate presented is valid. The tool we have developed is in early testing now. We call it CertWatch.

The premise is simple. The server administrator knows exactly what public/private keypair they are using (or can easily find out) and publishes this in DNSSEC-signed DNS records for our tool to find. The tool then periodically polls the XMPP server over Tor to see what certificate is presented. If the key in the certificate matches the key in the DNS zone, we know the session is not MITM’d (some caveats below). CertWatch checks the current setup of any domain entered, and if not yet declaring any keys, it displays setup instructions. It will either tell you to enable DNSSEC or it will tell you which DNS records to add. Note that these records are additive, so it is safe to add multiple sets when serving multiple domains from one host through SRV records. Once everything looks good, running a domain through CertWatch will display a success message and instructions for getting notified of any issues. It will then poll the domain periodically, and if any key mismatches are found, those subscribing to notifications will receive an alert.

Some tools change your key on every certificate renewal, which means you would have to update your zone setup every time your certificates renew. Other tools allow you to reuse existing keys and save some hassle, such as certbot with the --reuse-key option.

Caveats

If we did our polls from our main server IPs, it would be easy for any attacker to detect our probes and selectively disable the MITM attack for us, making themselves invisible. Probing over Tor gives CertWatch a different IP for every request and a traffic profile almost certainly consistent with the sort that many MITM attackers are going to want to inspect. This is not perfect, however, and it may be possible to fingerprint our probes in other ways to selectively MITM some traffic and ignore others. Just because our tool’s sessions were not MITM’d does not prove that no sessions are.

Anyone with physical access to the server may also scrape the actual certificates and keys off the disk, or use similar techniques in order to execute a MITM with exactly the same key the server operator expects and would use. The particular mitigation technique CertWatch helps administrators implement is ineffective against this. Rotating the key occasionally may help, but it really depends on the sophistication of the attacker and how much access they have.

Check it Out

So head over to CertWatch, enter your service domain, and let us know what you think.

by Stephen Paul Weber at September 28, 2024 03:44

SMS Censorship

Since almost the very beginning of JMP there have been occasional SMS and MMS delivery failures with an error message like “Rejected for SPAM”. By itself this is not too surprising, since every communications system has a SPAM problem and every SPAM blocking technique has some false positives. Over the past few years, however, the incidence of this error has gone up and up. But whenever we investigate, we find no SPAM being sent, just regular humans having regular conversations. So what is happening here? Are the SPAM filters getting worse?

In a word: yes.

It seems that in an effort to self-regulate and reduce certain kinds of “undesirable content” most carriers have resorted to wholesale keyword blocking of words not commonly found in SPAM, but referring to items and concepts the carriers find undesirable. For example, at least one major USA carrier blocks every SMS message containing the word “morphine”. How any hospital staff or family members of hospitalized patients are meant to know they must avoid this word is anyone’s guess; presumably, after being told their messages are “SPAM” enough times, they can figure out to say “they upped Mom’s M dose” instead?

What We Are Doing

To preserve our reputation with these carriers, we have begun to build an internal list of the keywords being blocked by different major carriers, and we block all messages containing those keywords ourselves rather than attempting to deliver them. While this seems like a suboptimal solution, the messages would never have been delivered anyway, and it reduces the amount of “SPAM” that the carriers see coming from us. We have also instituted a cooldown such that if your account triggers a “SPAM” error from a major carrier, further messages are blocked for a short time to avoid repeated attempts to send the same message.

So what are the kinds of “undesirable content” the carriers are attempting to avoid here?

  • Obviously please do not use JMP for anything illegal. This has never been allowed and we continue to not tolerate this in any way.
  • Additionally, please avoid sexually explicit or graphically violent discussions, or discussions about drugs illegal in any part of the USA.

This is not really our policy so much as it is that of the carriers we must work with in order to continue delivering your messages to friends and family.

What You Can Do

Every JMP account comes with, as an option, a Snikket instance of your very own. As always, we highly recommend inviting friends and family you have many discussions with (especially discussions about sex, firearms, or drugs) to your Snikket instance and continuing all conversations there in private instead of broadcasting them over the phone network. Sending an invite link to your Snikket instance is easy, and anyone who uses the link gets an account on your instance set up automatically, with you and the others already added as contacts, so it is a great way to speak more securely with family and friend groups. Snikket also enables higher-quality media sharing, video calls, and many other benefits for your regular contacts.

Of course we know you will continue to need SMS and MMS for many of your contacts now and in the future, and JMP is dedicated to continuing to provide best-in-class service for person to person communication in this way as well.

by Stephen Paul Weber at September 28, 2024 03:43

Mobile-friendly Gateway to any SIP Provider

We have for a long time supported the public Cheogram SIP instance, which allows easy interaction between the federated Jabber network and the federated SIP network. When it comes to connecting to the phone network via a SIP provider, however, very few of these providers choose to interact with the federated SIP network at all. It has always been possible to work around this with a self-hosted PBX, but documentation on the best way to do this is scant. We have also heard from some that they would like hosting the gateway themselves to be easier, as increasingly people are familiar with Docker and not with other packaging formats. So, we have sponsored the development of a Docker packaging solution for the full Cheogram SIP solution, including an easy way to connect to an unfederated SIP server.

XMPP Server

First of all, in order to self-host a gateway speaking the XMPP protocol on one side, you’ll need an XMPP server. We suggest Prosody, which is already available from many operating systems. While a full Prosody self-hosting tutorial is out of scope here, the relevant configuration to add looks like this:

Component "asterisk"
    component_secret = "some random secret 1"
    modules_disabled = { "s2s" }
Component "sip"
    component_secret = "some random secret 2"
    modules_disabled = { "s2s" }

Note that, especially if you are going to set the gateway up with access to your private SIP account at some provider, you almost certainly do not want either of these components federated. So no DNS setup is needed, nor do the component names need to be real hostnames. The rest of this guide will assume you’ve used the names here.

If you don’t use Prosody, configuration for most other XMPP servers should be similar.
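
For instance, a rough (untested) equivalent for ejabberd might look like the following in ejabberd.yml, using a single ejabberd_service listener for both components; adjust the names, port, and secrets to match your setup:

listen:
  -
    port: 5347
    ip: "127.0.0.1"
    module: ejabberd_service
    hosts:
      "asterisk":
        password: "some random secret 1"
      "sip":
        password: "some random secret 2"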

Run Docker Image

You’ll need to pull the Docker image:

docker pull singpolyma/cheogram-sip:latest

Then run it like this:

docker run -d \
    --network=host \
    -e COMPONENT_DOMAIN=sip \
    -e COMPONENT_SECRET="some random secret 2" \
    -e ASTERISK_COMPONENT_DOMAIN=asterisk \
    -e ASTERISK_COMPONENT_SECRET="some random secret 1" \
    -e SIP_HOST=sip.yourprovider.example.com \
    -e SIP_USER=your_sip_username \
    -e SIP_PASSWORD=your_sip_password \
    -e SIP_JID=your-jabber-id@yourdomain.example.com \
    singpolyma/cheogram-sip:latest

If you just want to connect with the federated SIP network, you can leave off the SIP_HOST, SIP_USER, SIP_PASSWORD, and SIP_JID. If you are using a private SIP provider for connecting to the phone network, then fill in those values with the connection information for your provider, and also your own Jabber ID so it knows where to send calls that come in to that SIP address.

Make a Call

You can now make a call to any federated SIP address at them\40theirdomain.example.com@sip and to any phone number at +15551234567@sip, which will route via your configured SIP provider.

You should even be able to use the dialler in Cheogram Android:

Cheogram Android Dialler

Inbound calls will route to your Jabber ID automatically as well.

What About SMS?

Cheogram SIP has some basic support for the SIP MESSAGE method, so if your provider offers that it may work, but more testing and polish are needed, since this is not a common feature among the providers we have tested with.

Where to Learn More

If you have any questions or feedback of any kind, don’t hesitate to stop by the project channel which you can get on the web or using your Jabber ID.

by Stephen Paul Weber at September 28, 2024 03:42

Newsletter: SMS Routes, RCS, and more!

Hi everyone!

Welcome to the latest edition of your pseudo-monthly JMP update!

In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client.  Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

SMS Censorship, New Routes

We have written before about the increasing levels of censorship across the SMS network. When we published that article, we had no idea just how bad things were about to get. At the beginning of April, our main SMS route decided to begin censoring, in both directions, all messages containing many common profanities. There was quite some back and forth about this, but in the end this carrier declared that the SMS network is not meant for person-to-person communication and that they don’t believe in allowing any profanity to cross their network.

This obviously caused us to dramatically step up the priority of integration with other SMS routes, work which is now nearing completion. We expect very soon to be offering long-term customers new options which will not only dramatically reduce the censorship issue, but also in some cases remove the max-10 group text limit, dramatically improve acceptance by online services, and more.

RCS

We often receive requests asking when JMP will add support for RCS, to complement our existing SMS and MMS offerings. We are happy to announce that we have RCS access in internal testing now. The currently-possible access is better suited to business use than personal use, though a mix of both is certainly possible. We are assured that better access is coming later in the year, and will keep you all posted on how that progresses. For now if you are interested in testing this, especially if you are a business user, please do let us know and we’ll let you know when we are ready to start some testing.

One thing to note is that “RCS” means different things to different people. The main RCS features we currently have access to are typing notifications, displayed/read notifications, and higher-quality media transmission.

Cheogram Android

Cheogram Android 2.15.3-1 was released this month, with bug fixes and new features including:

  • Major visual refresh, including optional Material You
  • Better audio routing for calls
  • More customizable custom colour theme
  • Conversation read-status sync with other supporting apps
  • Don’t compress animated images
  • Do not default to the network country when there is no SIM (for phone number format)
  • Delayed-send messages
  • Message loading performance improvements

New GeoApp Experiment

We love OpenStreetMap, but some of us have found existing geocoder/search options lacking when it comes to searching by business name, street address, etc. As an experimental way to temporarily bridge that gap, we have produced a prototype Android app (source code) that searches Google Maps and allows you to open search results in any mapping app you have installed. If people like this, we may also extend it with a server-side component that hides all PII, including IP addresses, from Google, for a small monthly fee. For now, the prototype is free to test and will install as “Maps+” in your launcher until we come up with a better name (suggestions welcome!).

To learn what’s happening with JMP between newsletters, here are some ways you can find out:

Thanks for reading and have a wonderful rest of your week!

by Stephen Paul Weber at September 28, 2024 03:41

Newsletter: eSIM Adapter (and Google Play Fun)

Hi everyone!

Welcome to the latest edition of your pseudo-monthly JMP update!

In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client.  Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

eSIM Adapter

This month we’re pleased to announce the existence of the JMP eSIM Adapter. This is a device that acts exactly like a SIM card and will work in any device that accepts a SIM card (phone, tablet, hotspot, Rocket Stick), but the credentials it offers come from eSIMs provided by the user. With the adapter, you can use eSIMs from any provider in any device, regardless of whether the device or OS support eSIM. It also means you can move all your eSIMs between devices easily and conveniently. It’s the best of both worlds: the convenience of downloading eSIMs along with the flexibility of moving them between devices and using them on any device.

So how are eSIMs downloaded and written to the device in order to use them? The easiest and most convenient way will be the official Android app, which will of course be freedomware and available in F-droid soon. The app is developed by PeterCxy of OpenEUICC fame. If you have an OS that bundles OpenEUICC, it will also work for writing eSIMs to the adapter. The app is not required to use the adapter, and swapping the adapter into another device will work fine. What if you want to switch eSIMs without putting the card back into an Android device? No problem; as long as your other device supports the standard SIM Toolkit menus, you will be able to switch eSIMs on the fly.

What if you don’t have an Android device at all? No problem, there are a few other options for writing eSIMs to the adapter. You can get a PC/SC reader device (about $20 on Amazon for example) and then use a tool such as lpac to download and write eSIMs to the adapter from your PC. Some other cell modems may also be supported by lpac directly. Finally, there is work in progress on an optional tool that will be able to use a server (optionally self-hosted) to facilitate downloading eSIMs with just the SIM Toolkit menus.

There is a very limited supply of these devices available for testing now, so if you’re interested, or just have questions, swing by the chatroom (below) and let us know. We expect full retail roll-out to happen in Q2.

Cheogram Android

Cheogram Android saw a major new release this month: 2.13.4-1 includes a visual refresh, many fixes, and new features including:

  • Allow locally muting channel participants
  • Allow setting subject on messages and threads
  • Display list of recent threads in channel details
  • Support full channel configuration form for owners
  • Register with channel when joining, deregister when leaving (where supported)
  • Expert setting to choose voice message codec

Is My Contact List Uploaded?

Cheogram Android has always included optional features for integrating with your local Android contacts (if you give permission). If you add a Jabber ID to an Android contact, their name and image are displayed in the app. Additionally, if you use a PSTN gateway (such as cheogram.com, which JMP acts as a plugin for) all your contacts with phone numbers are displayed in the app, making it easy to message or call them via the gateway. This is all done locally and no information is uploaded anywhere as part of this feature.

Unfortunately, Google does not believe us. From speaking with developers of similar apps, it seems Google no longer believes that anyone with access to the device contacts is not uploading them somewhere. So, starting with this release, Cheogram Android from the Play Store says when asking for contact permission that contacts are uploaded. Not because they are, but because Google requires that we say so. The app’s privacy policy also says contacts are uploaded; again, only because Google requires that it say this without regard for whether it is true.

Can any of your contacts be exposed to your server? Of course. If you choose to send a message or make a call, part of the message or call’s metadata will transit your server, so the server could become aware of that one contact. Similarly, if you view the contact’s details, the server may be asked whether it knows anything about this contact. And finally, if you tap the “Add Contact” button in the app to save this contact to your server-side list, that one contact is saved server-side. Unfortunately, spelling out all these different cases did not appease Google, who insisted we must say that we “upload the contact list to the server” in exactly those words. So, those words now appear.

Thanks for Reading

The team is growing! This month we welcome SavagePeanut to the team to help out with development.

To learn what’s happening with JMP between newsletters, here are some ways you can find out:

Thanks for reading and have a wonderful rest of your week!

by Stephen Paul Weber at September 28, 2024 03:40

September 26, 2024

Erlang Solutions

Erlang Concurrency: Evolving for Performance

Some languages are born performant and later tackle concurrency. Others are born concurrent and later build up performance. Systems programming languages like C or Rust are examples of the former; Erlang’s concurrency is an example of the latter.

A mistake in concurrency can essentially let all hell break loose, causing incredibly hard-to-track bugs and even security vulnerabilities, while a mistake in performance can leave a product trailing behind the competition or make it entirely unusable to begin with.

It’s all risky trade-offs. But what if we can have a concurrent framework with competitive performance?

Let’s see how Erlang answers that question.

A look into scalable platforms

C and Rust are traditionally considered performant because they compile to native machine instructions, enforce manual memory management, and expose semantics that map closely to the underlying hardware. Erlang, meanwhile, is famous for its top-notch concurrency and fault tolerance, and it was built to be that way: lightweight processes, cheap message passing, and share-nothing semantics are some of the key concepts behind that strength. Like everything in IT, there is a wide range of trade-offs between these two problems.

A case of encoding and decoding protocol

An advantage of having a performance problem rather than a concurrency problem is that the former is much easier to track down, thanks to automated tooling like benchmarking and profiling. It’s a common path for Erlang: when you need performance, it offers a very easy-to-use foreign function interface (NIFs, or Native Implemented Functions). This is especially common for single-threaded algorithms, where the challenges of concurrency or parallelism don’t apply and the risks of a memory-unsafe language like C are minimal. The most common examples are hashing operations and protocol encoding and decoding, such as JSON or XML.
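
To make that concrete, here is a minimal sketch (with a hypothetical module name) of what the Erlang side of a NIF looks like: the exported function is a stub that gets replaced by the native implementation when the shared library is loaded.

%% hypothetical example module; the real work happens in the native library loaded below
-module(myjid_nif).
-export([nodeprep/1]).
-on_load(init/0).

%% load the compiled NIF library (e.g. priv/myjid_nif.so) when this module is loaded
init() ->
    ok = erlang:load_nif("priv/myjid_nif", 0).

%% stub: only ever called if the NIF library failed to load
nodeprep(_Binary) ->
    erlang:nif_error(nif_library_not_loaded).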

Messaging platforms

Many years ago, one of my first important contributions to MongooseIM, our extensible and scalable Instant Messaging server, was a set of optimisations around JIDs, short for Jabber IDentifiers, the identifiers XMPP uses to identify users and their sessions. After an important cleanup of JID usage, I noticed in profiling results that calls to the encoding and decoding operations were not only extremely common but also suspiciously slow. So I wrote my first NIF, implementing these basic and very simple operations in C, and later moved this code to its own repo.

Fast forward a few years, and I had the chance to revert to the original Erlang code for the encoding, and later on for the decoding. The same very straightforward Erlang code had become a lot faster than the carefully optimised C version. But this is a very simple algorithm, so let’s push the limits in more interesting places.

The evolution of the Erlang runtime

OTP releases have been delivering more and more powerful performance improvements, especially since the JIT compiler was introduced. To put things into context, when I wrote those NIFs, no JIT compiler was available. Ever since, the quest to beat C code has become feasible in a myriad of scenarios.

For example, one place where the community has put emphasis on building ever more performant code is JSON parsers. Starting with Elixir’s Jason and Poison, we also have Thoas, a pure-Erlang alternative. They all ran their benchmarks against jiffy, the archetypical C solution. Recently, a JSON module was incorporated directly into Erlang/OTP. See the benchmarks yourself, for encoding and decoding: it beats jiffy in virtually all scenarios, sometimes by a wide margin.
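
As a quick illustration (assuming OTP 27 or later, where the `json` module ships in the standard library; key order in the encoded output may vary):

%% encode an Erlang map to JSON (json:encode/1 returns iodata) and decode it back
Encoded = iolist_to_binary(json:encode(#{<<"language">> => <<"erlang">>, <<"jit">> => true})),
#{<<"language">> := <<"erlang">>, <<"jit">> := true} = json:decode(Encoded).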

Building communities

Here’s a more interesting example. I’ve been working on a MongooseIM extension for FaceIT, the leading independent competitive gaming platform for online multiplayer PvP gamers. They provide a platform where you can find the right peers to play with, one that keeps the community healthy and thriving by making sure matches are fair and no cheating is allowed, and, ultimately, a platform where you can build relationships with your peers.

These communities are enabled by MongooseIM and need presence lists.

The challenge of managing presence for such a large community is two-fold. First, we want to reduce network operations by introducing pagination and real-time updates of only the deltas, so that your device can update the view on display without any redundant network traffic. Then we want to be able to manage such large data structures with maximal performance.

Native code

This is archetypical: a very large data structure that requires fast updates is not where an immutable runtime would shine, as “mutations” are in reality copies. So you consider a solution in a classically performant language, for example C, or better yet a memory-safe one like Rust. We quickly found a ready-made solution: a Rust implementation of an indexed sorted set was readily available. A very good option, probably the best around. Let’s take a measurement for reference: inserting a new element at any point in a set of 250K elements takes ~4μs. The question then is, can we give pure Erlang code a chance to compete with ~4μs?

Data structures

You can find more details about this endeavour in a guest blog I wrote for FaceIT. In a nutshell, the main disadvantage is the enormous amount of copying that any mutation implies. So the first idea is to have a list of lists instead: this is pretty much a skip-list. A list of lists of lists is just a skip list with more lanes, and all these lanes require constant rebalancing.

Another data structure famous for requiring rebalancing is the balanced binary tree, and Erlang ships with an implementation of general balanced trees based on this research paper by Prof. Arne Andersson. This will be our candidate. The only thing missing is operations on indexes, as we want to know the positions at which modifications took place. Wikipedia has a hint here: Order Statistic Trees. Extending `gb_sets` to support indexes took no more than a couple of hours, and we’re ready to go.
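
As a hedged sketch of the idea (not the actual MongooseIM code): if every node additionally stores the size of its left subtree, the k-th smallest element can be found in O(log n) steps, which is exactly the piece `gb_sets` lacks out of the box.

%% illustrative only: each node is {Element, LeftSubtreeSize, Left, Right} or nil
nth(K, {Elem, LeftSize, Left, Right}) ->
    if
        K =< LeftSize      -> nth(K, Left);                 % k-th element is in the left subtree
        K =:= LeftSize + 1 -> Elem;                         % this node is exactly the k-th element
        true               -> nth(K - LeftSize - 1, Right)  % skip this node and the left subtree
    end.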

Benchmarking

5μs!

Adding an element at any point of a 250K list takes on average 5μs! In pure Erlang!

The algorithm has an amortised runtime of `O(1.46*log2(n))`, and the logarithm base two of 250K is ~18, so applying any operation takes at most around 26 (1.46 × 18) steps. Even including the most unfortunate operations, which need the full number of steps plus some rebalancing, the worst case is still 17μs.

What about getting pages? Getting the first 128 elements of the set takes 92μs. To add a bit more detail, the same test takes 100μs on OTP26, that is, by only upgrading the Erlang version we already get an 8% performance improvement.

Conclusion

We’ve seen a scenario with MongooseIM JIDs where code once rewritten in C for performance reasons could simply be reverted to the original Erlang and beat C’s performance; a case with JSON parsing where well-crafted Erlang code could beat the supposedly unbeatable C alternative; and, finally, a problem where a very large data structure written in Rust wasn’t necessary because a simple pure-Erlang implementation was just as competitive.

We have a runtime that was born concurrent and evolved to be performant. We have a runtime that is memory-safe and makes the enormously complex problem of concurrency and distribution easy, allowing the developer to focus on the actual business case while massively reducing the costs of support and the consequences of failure. A runtime where performance does not need to be sacrificed to achieve these things. A runtime that enables you to write messaging systems (WhatsApp, Discord, MongooseIM), websites (Phoenix and the insanely performant JSON parsing), and community platforms (FaceIT).


We know how to make this all work, and how to keep the ecosystem evolving. If you want to see the power of Erlang and Elixir in your business case, contact us!


The post Erlang Concurrency: Evolving for Performance appeared first on Erlang Solutions.

by Nelson Vides at September 26, 2024 09:31

September 19, 2024

Erlang Solutions

Elixir, 7 steps to start your journey

Welcome to the series “Elixir, 7 Steps to Start Your Journey”, dedicated to those who want to learn more about this programming language and its advantages.

If you still don’t have much experience in the world of programming, Elixir can be a great option to get started in functional programming, and if you have already experimented with other programming languages, not only will it be easier for you, but I am sure that you will find the differences between programming paradigms interesting.

In any case, this series aims to help you have fun exploring Elixir and find enough reasons to choose it for your next project. I hope you enjoy it!

Why a series dedicated to Elixir?

Before fully entering the topic, I’ll share a little about my experience with Elixir and why I decided to write this series.

I discovered Elixir in 2018, I would say, by chance. Someone told me about this programming language and how wonderful it was. At that time, I had no idea, nor had I had any contact with functional programming beyond university internships. However, a few months later, ElixirConf took place in Mexico, so I attended to learn more about this technology.

The first thing that captivated me was how friendly the community was. Everyone was relaxed, having a lot of fun and sharing. The atmosphere was incredible. So, I joined this world and started collaborating on my first project with Elixir.

The start of the journey

At first, I didn’t have an easy time, since the project was not that simple.

The project used Phoenix Channels, and until then, I had not been involved in a project with real-time communication features. But to my surprise, it didn’t take me that long to understand how everything fits together; the code patterns were intuitive, there was a lot of documentation available, the syntax was lovely, and there were no files with hundreds of thousands of lines of code that made them difficult to understand.

Many years have passed since that beginning, and I continue to enjoy programming with Elixir and being surprised by all the new things emerging in this community. So, I decided to write a series of posts to share these experiences that I hope will be helpful to those who are just getting to know this programming language. Spoiler: you won’t regret it.

That being said, let’s talk about Elixir.

Let’s talk about Elixir!

“Elixir is a dynamic, functional language for building scalable and maintainable applications.”

José Valim created it in 2012, and version 1.0 was released in 2014. As you can see, it is a relatively young programming language supported by an excellent foundation, the BEAM.

Elixir runs on the Erlang virtual machine known as BEAM. Some features of this machine are:

  • It supports millions of simultaneous users and transactions.
  • It has mechanisms to detect failures and recover from them.
  • It allows you to develop systems capable of operating without interruption, indefinitely!
  • It allows real-time system updates without stopping or interrupting user activity.

All these properties carry over to Elixir; plus, as I mentioned before, the syntax is quite intuitive and pleasant, and many resources are available, so creating a project from scratch to start experimenting will be a piece of cake.

Elixir

It’s been a short introduction, so for now, it’s okay if you’re not sure what role BEAM plays in this series. In the next chapter, we will delve into it.

When we talk about Elixir, it is also essential to know the fundamentals that make this programming language such a solid and reliable option. And if you don’t have much experience with functional programming, don’t worry; Elixir will help you understand the concepts while putting them into practice.

What topics will the series cover?

This series will cover the essential topics to help you develop a project from scratch and understand what is behind Elixir’s magic. 

The chapters will be divided as follows:

  1. Erlang Virtual Machine, the BEAM
  2. Understanding Processes and Concurrency
  3. Libraries and Frameworks
  4. Testing and Debugging
  5. The Elixir Community
  6. Functional Programming vs. Object-Oriented Programming
  7. My first project with Elixir!

Is this series for me?

This series is for you if you:

  • Are starting in the web programming world and don’t know which language to choose as your first option.
  • Already have programming experience, but want to explore new options and learn more about functional programming.

Or if you are simply looking for a programming language that allows you to learn and have fun at the same time.

Next chapter

In the next post, “Erlang Virtual Machine, the BEAM”, we will talk about Erlang, the elements that make the BEAM so powerful, and how Elixir benefits from it. Don’t miss it! In the meantime, drop the team a message if you have any pressing Elixir questions.

The post Elixir, 7 steps to start your journey appeared first on Erlang Solutions.

by Lorena Mireles at September 19, 2024 08:58

Gajim

Gajim 1.9.4

Gajim 1.9.4 integrates XMPP Providers, supports Hats and brings many improvements and bug fixes. Thank you for all your contributions!

What’s New

Hats

Thanks to our contributor @nicoco, Gajim received support for Hats (XEP-0317). Hats allow you to assign roles to group chat participants, e.g. “Support”, “Expert”, or really anything you like. Gajim displays Hats in the list of participants.

XMPP Providers integration

Gajim integrates the XMPP Providers list

XMPP (Extensible Messaging and Presence Protocol) is like a common language that your app and your chat partner’s app speak to each other. Similar to email, there is a service provider managing your account and messages. That provider speaks XMPP too.

But some XMPP providers support functionalities that other providers do not. Some support registrations via your app and some only via their websites. Some provide more space for sharing media than others or store them longer than others. That is where the XMPP Providers project comes into play. The project offers a curated list of XMPP providers, which makes it easy for you to find a suitable provider. All properties of the included providers are automatically checked on a daily basis and updated if needed. When creating a new account with Gajim, you will now be offered suggestions taken from this curated list.

What else happened

  • You can now choose if you want to synchronize group chats between your chat apps
  • Message search now offers an interface for filtering
  • Message loading performance has been improved
  • Group chat search results show more info (reachable via Start Chat > Group chat search)
  • Typing indicator has been moved to the bottom of the chat
  • A bug which caused the chat to become very wide if messages contained long links has been fixed

Have a look at the changelog for a complete list.

For our package maintainers

Attention to all package maintainers: this release changes how Gajim is built. Please check README.md for more information.

Gajim

As always, don’t hesitate to contact us at gajim@conference.gajim.org or open an issue on our Gitlab.

Gajim is free software developed by volunteers.
If you like to support Gajim, please consider making a donation.

Donate via Liberapay:

September 19, 2024 00:00

September 17, 2024

Ignite Realtime Blog

Openfire 4.9.0 release!

The Ignite Realtime community is happy to be able to announce the immediate availability of version 4.9.0 of Openfire, its cross-platform real-time collaboration server based on the XMPP protocol!

Compared to the previous non-patch release, this one is a bit smaller. It is mostly a maintenance release and includes some preparations (mainly deprecations) for a future release.

Highlights for this release:

  • A problem has been fixed that caused, under certain conditions, a client connection to be disconnected. This appears to have affected clients sending multi-byte character data more than others.
  • Community member Akbar Azimifar has provided a full Persian translation for Openfire!

The list of changes that have gone into the Openfire 4.9.0 release has some more items though! Please review the change log for all of the details.

Interested in getting started? You can download installers of Openfire here. Our documentation contains an upgrade guide that helps you update from an older version.

The integrity of these artifacts can be checked with the following sha256sum values:

7973cc2faef01cb2f03d3f2ec59aff9b2001d16b2755b4cc0da48cc92b74d18a  openfire-4.9.0-1.noarch.rpm
a0cd627c629b00bb65b6080e06b8d13376ec0a4170fd27e863af0573e3b4f791  openfire_4.9.0_all.deb
bf62c02b0efe1d37fc505f6942a9cf058975746453d6d0218007b75b908a5c3c  openfire_4_9_0.dmg
1082d9864df897befa47230c251d91ec0780930900b2ab2768aaabd96d7b5dd9  openfire_4_9_0.exe
12a4a5e5794ecb64a7da718646208390d0eb593c02a33a630f968eec6e5a93a0  openfire_4_9_0.tar.gz
c86bdb1c6afd4e2e013c4909a980cbac088fc51401db6e9792d43e532963df72  openfire_4_9_0_x64.exe
97efe5bfe8a7ab3ea73a01391af436096a040d202f3d06f599bc4af1cd7bccf0  openfire_4_9_0.zip

We would love to hear from you! If you have any questions, please stop by our community forum or our live groupchat. We are always looking for volunteers interested in helping out with Openfire development!

For other release announcements and news follow us on Mastodon or X

6 posts - 4 participants

Read full topic

by guus at September 17, 2024 19:42

Openfire HTTP File Upload plugin v1.4.1 release!

We have now released version 1.4.1 of the HTTP File Upload plugin!

This plugin adds functionality to Openfire that allows clients to share files, as defined in the XEP-0363 ‘HTTP File Upload’ specification.

This release brings two changes, both provided by community members (thanks!):

  • Vladislav updated the Ukrainian translation;
  • Anno created an admin console page for this plugin.

As always, your instance of Openfire should automatically make the update available in the next few hours. Alternatively, you can download the new release of the plugin at the HTTP File Upload plugin’s archive page.

For other release announcements and news follow us on Mastodon or X

1 post - 1 participant

Read full topic

by guus at September 17, 2024 19:17

September 12, 2024

Ignite Realtime Blog

Openfire Hazelcast plugin version 3.0.0

Earlier today, we blogged about a boatload of Openfire plugins for which we made available maintenance releases.

Apart from that, we’ve also made a more notable release: that of the Hazelcast plugin for Openfire.

The Hazelcast plugin for Openfire adds clustering support to Openfire. It is based on the Hazelcast platform.

This release brings a major upgrade of the platform. It migrates from version 3.12.5 to 5.3.7.

As a result, replacing an older version of the plugin with this new release requires some careful planning. Notably, the configuration stored on-disk has changed (and is unlikely to be compatible between versions). Please refer to the readme of the plugin for details.

A big thank you goes out to community member Arwen for making this upgrade happen!

As usual, the new version of the plugin will become available in your Openfire server within the next few hours. Alternatively, you can download the plugin from its archive page.

For other release announcements and news follow us on Mastodon or X

1 post - 1 participant

Read full topic

by guus at September 12, 2024 19:26

Openfire plugin maintenance releases!

The Ignite Realtime community is gearing up for a new release of Openfire. In preparation, we have been performing maintenance releases for many Openfire plugins.

These Openfire plugin releases have mostly non-functional changes, intended to make the plugin compatible with the upcoming 4.9.0 release of Openfire:

The following plugins have (also) seen minor functional upgrades:

As usual, the new versions of the plugins should become available in your Openfire server within the next few hours. Alternatively, you can download the plugins from their archive pages, which are linked to above.

For other release announcements and news follow us on Mastodon or X

1 post - 1 participant

Read full topic

by guus at September 12, 2024 19:09

September 11, 2024

JMP

Newsletter: eSIM Adapter Launch!

Hi everyone!

Welcome to the latest edition of your pseudo-monthly JMP update!

In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client.  Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

eSIM Adapter

We’ve talked before about the eSIM Adapter, but today we’re excited to announce that we have a good amount of production stock, and you can order the eSIM adapter right now. Existing JMP customers who want to pay with their account balance can also order by contacting support. Have a look at the product launch on Product Hunt as well.

JMP’s eSIM Adapter is a device that acts exactly like a SIM card and will work in any device that accepts a SIM card (phone, tablet, hotspot, USB modem), but the credentials it offers come from eSIMs provided by you. With the adapter, you can use eSIMs from any provider in any device, regardless of whether the device or OS support eSIM. It also means you can move all your eSIMs between devices easily and conveniently. It’s the best of both worlds: the convenience of downloading eSIMs along with the flexibility of moving them between devices and using them on any device.

For JMP Data Plan Physical SIM Owners

Our data plan has always offered the choice of a physical SIM. For people who just want the data plan and no other eSIMs this works fine, and we will continue to sell these legacy cards until we run out of stock. However, some of you might be wondering if you need to buy an eSIM Adapter now in order to get some of these benefits. The answer might be no! If you order just the USB reader, you can use the app to flash new eSIMs and switch profiles on your existing physical SIM. This isn’t quite as convenient as the full eSIM Adapter (you will need to pop the SIM out and put it into the USB reader even to switch profiles), but it does work for those who already have one.

Cheogram Android

Cheogram Android 2.15.3-3 and 2.15.3-4 have been released. These releases contain some improvements to the embedded “widget” system, funded by NLnet. You can now select from a large list of widgets right in the app. More improvements to this system are coming soon, and if you’re a web-tech developer who is interested in extending people’s chat clients, check out the docs!

Email Gateway

We sponsor the development of an email gateway, Cheogram SMTP, which is also getting better thanks to NLnet. The gateway now supports file attachments on emails, and will soon support sharing widgets with Delta Chat users as well!

To learn what’s happening with JMP between newsletters, here are some ways you can find out:

Thanks for reading and have a wonderful rest of your week!

by Stephen Paul Weber at September 11, 2024 15:14

Snikket

Snikket Server - September 2024 release

We hope you’ve been having a good summer (at least if you’re up here in the northern hemisphere). Today we’re back with a new release of the self-hosted Snikket server software.

This software is what’s at the core of the Snikket project: a self-hostable “personal messaging server in a box”. If you wish for something like Messenger, WhatsApp or Signal, but without using their servers, Snikket is for you. Once deployed, you can create invitation links for family, friends, colleagues… social groups of any kind are Snikket’s main target audience. The invitation links walk even the least-technical people safely through downloading the Snikket app and joining your private Snikket instance.

If you’re not a self-hoster, we also have a hosted version which lets you get your own instance started in just a few clicks.

What’s new

Some highlights of what changes this release brings:

Invitations

We’ve made a number of small but important improvements to the way invitations are created and managed.

For example, people often told us that after creating a few invitation links and sending them out, they would forget who each link was created for and why. Now Snikket allows you to attach a brief custom note to invitations, visible only to admins in the list of pending invitations, making it easy to see at a glance who has yet to accept their invitation.

Screenshot of the new invitation form comment field

Example of the new invitation form, which allows adding an optional comment

Meanwhile we’ve added an important feature that was missing from invitations - you can now specify the role that will be applied to anyone who joins using a given invitation link. Previously, if you wanted to set up e.g. a child with a “limited” account, they would first join as a normal user, and then you would need to navigate to the user management page and assign them the “Limited” role. With this new release, you are able to assign the role directly when you create the invitation, which is much simpler and more secure.

Screenshot of the invitation role selection options

Invitations now allow selecting the user’s role before they sign up

A really handy feature we’ve added is the ability to share the invitation link directly through other apps, if your web browser supports it (which most mobile browsers do). This can make it much easier to send invitation links via SMS/email or other apps in a couple of taps, without manual copy and pasting.

Screenshot showing an invitation, and a list of apps the link can be shared to

An example of the invitation sharing feature in Firefox on Android

Changes to blocking

Though uncommonly used on private servers, it’s nevertheless possible to block people in Snikket. Previously, when you did this, the blocked person would receive a delivery error when attempting to send a message to someone who had blocked them. From this error, it was possible to deduce that you had been blocked.

Based on feedback, we have adjusted this so that no delivery error is sent to people you have blocked.

Technical stuff

We’ve added a few new things that are not present in the interface, but are of interest to people deploying Snikket.

It is now possible to adjust the port of the STUN/TURN server. By default this adjusts the port of the internal server that is provided with Snikket, but if you have configured an external TURN server then it means you are now able to host that on non-standard ports too.

Self-hosted instances are now able to use International Domain Names (IDNs), i.e. domain names that contain unicode characters. This feature is not yet available for instances hosted by Snikket, but let us know if you’re interested.

Other

Of course these are just the highlights. We’ve also improved a bunch of things under the hood, either in Snikket or as part of Prosody, the open-source project which powers Snikket’s chat connections, and has been updated in this new release.

For more information, and more changes in this release, check out the release notes.

Upgrading

Upgrading an existing installation is super simple and takes less than a minute! You can find instructions in the ‘Upgrading’ section of the release notes.


If you have any questions or feedback about the new release, come and join the discussion in our community chat.

We hope you enjoy Snikket. Happy chatting!

by Snikket Team (team@snikket.org) at September 11, 2024 00:00

September 09, 2024

Erlang Solutions

How Generative AI is Transforming Healthcare

Generative AI (Gen AI) has emerged as a transformative technology across the healthcare industry. It has the potential to vastly transform the clinical decision-making process and ultimately improve patient health outcomes. 

Generative AI adoption in healthcare is now valued at over $1.6 billion, and the global AI in healthcare market is projected to reach $45.2 billion by 2026.

Given such rapid growth projections, healthcare providers should recognise the opportunities presented by generative AI and explore how to incorporate it to improve patient care.

We will take an in-depth look into Gen AI and provide insights into how it is poised to revolutionise the healthcare space. 

Understanding Generative AI in healthcare

So, traditional vs generative AI: what’s the difference?

Before we compare the two, let’s take a moment to clearly define them:

Traditional AI: Also known as narrow or weak AI, this is a branch of artificial intelligence that executes tasks based on pre-established algorithms. These systems are usually specialised and excel at specific functions, but they have a restricted range of applications compared to other types of AI.

Examples include: Chatbots, voice assistants, credit score systems and spam filters.

Generative AI: A form of artificial intelligence that can produce new outputs, including text, images, and other types of data. It starts by analysing large amounts of existing data, and those insights are then used to create fresh content. Generative AI uses machine learning to identify patterns, make predictions, and generate new material based on the data it processes.

Generative AI and predictive AI are quite similar, as predictive AI is also a learning-based technology that identifies and anticipates patterns.

As well as data analysis, generative AI needs direct human input, more commonly known as a “prompt”, to guide its output. Prompts aren’t just text; they can also include videos, graphics, or audio.

Key differences between Traditional and Generative AI

Now let’s look into the main differences between traditional and generative AI:

  • Applications: Traditional AI is most commonly used for data analysis, forecasting, and optimisation tasks where rules are well-defined. Generative AI is best for tasks such as image recognition, natural language processing (NLP), and sentiment analysis, where patterns in unstructured data are key.
  • Data handling: Traditional AI is best for structured data and tasks requiring precise, rule-based decision-making. Generative AI excels at managing and interpreting large volumes of unstructured data, such as images, videos, and text.
  • Rules-based vs learning-based: Traditional AI operates on explicit rules programmed by humans. Generative AI learns from data and adapts its behaviour according to the patterns it discovers.
  • Flexibility and adaptability: Traditional AI is rigid and less adaptable, requiring manual updates to handle new or unexpected situations. Generative AI is highly flexible, capable of learning from diverse datasets, and can adjust to new scenarios without manual intervention.
  • Creativity and autonomy: Traditional AI lacks creative abilities and autonomy, confined to predefined rules and tasks. Generative AI can autonomously generate content, such as images and text, demonstrating creativity beyond rule-based systems.
  • Learning capabilities: Traditional AI depends on predefined rules and algorithms and requires human intervention for updates or adjustments. Generative AI uses deep learning to continuously improve by analysing data, identifying patterns, and making predictions, making it highly adaptable.

While there are no clear “winners”, generative AI’s advantage lies in its broader offering. Its strength comes from using models to create unique content, without the need to rely on clear and rigid rules. Compared to its traditional counterpart, generative AI allows for greater flexibility and problem-solving.

For healthcare industry leaders, the strategic value of Generative AI cannot be overstated. Its benefits provide the ability to foster a culture of continuous improvement, enabling organisations to innovate in a rapidly evolving industry.

Enhancing patient care and outcomes with Generative AI

Healthcare leaders have long understood the role of personalised care in meeting patient needs. Generative AI provides the opportunity to create personalised care plans, with the ability to leverage patient experience and achieve significantly better health outcomes. 

Here are some of generative AI’s main emerging applications:

Personalised treatment plans

A standout advantage of generative AI in healthcare is its ability to create treatment plans that are uniquely tailored to each patient. Traditional approaches are dependent on standardised protocols, which might not fully address the specific needs of every individual. 

But generative AI can analyse extensive patient data, from medical history and genetic information to lifestyle choices and social factors, to develop highly personalised treatment recommendations.

AI can examine a patient’s genetic likelihood of developing certain conditions and suggest preventative measures or specific treatment options. By offering customised care recommendations, healthcare professionals can enhance the effectiveness of treatments and reduce the chances of adverse effects, leading to improved health outcomes.

Predictive analytics

Generative AI also offers significant potential in the realm of predictive analytics, helping healthcare providers foresee and prevent potential health issues before they become critical. By examining large datasets of patient information, such as vital signs, lab results, and diagnostic images, AI algorithms can detect patterns that may signal future health risks.

For example, researchers at Mayo Clinic utilised generative AI to develop a deep-learning algorithm that forecasts the likelihood of patient complications following surgery and can produce customised treatment recommendations depending on the risk.

Enhanced diagnostic accuracy

Diagnostic mistakes are a massive concern within healthcare. They can lead to incorrect or delayed treatment, which can have life-threatening consequences for patients. Generative AI improves diagnostic accuracy by supporting healthcare professionals in their decision-making processes.

Generative AI systems can:

  • Examine medical images, such as X-rays, MRIs, and pathology slides, with incredible precision.
  • Identify even the smallest abnormalities that might be missed by the human eye.
  • Recognise patterns that are linked to particular diseases by learning from sizable datasets.
  • Analyse symptoms and clinical data to suggest possible diagnoses. 

By improving diagnostic accuracy, generative AI helps to ensure timely and correct treatments. It also reduces the likelihood of unnecessary procedures and time delays, which ultimately ease the strain on healthcare systems.

Virtual health assistants

Chatbots that are powered by AI act as virtual health assistants (VHAs) for patients. They can provide a wealth of support, such as answering medical queries, providing relevant health information, medication reminders, personalised health advice and support for chronic conditions, to name a few services.

DAX Express, Nuance’s AI-powered clinical documentation tool used by US healthcare providers, integrates GPT-4, a generative AI technology, to automate clinical documentation. The application listens to patient-physician interactions and then generates medical notes, which are uploaded directly into Electronic Health Records (EHR) systems, removing the need for a human review.

The use of gen AI in DAX Express significantly speeds up the documentation process. It reduces it from hours to seconds, a substantial benefit of VHAs. This also improves patient care by allowing doctors more time with patients, enhancing the accuracy of medical records, and speeding up follow-up treatments.

Operational efficiency and cost reduction

As briefly mentioned, AI’s ability to provide transformative solutions that streamline processes and reduce costs while enhancing patient care is a major driver for healthcare service providers. Various AI-driven technologies, including generative AI, are reshaping the way healthcare organisations operate, so both providers and patients benefit from these advancements.

Wearables to enhance patient monitoring

Fitbits, Apple watches, Oura rings and ECG machines are just a few examples of AI-powered wearables that are transforming patient care through continuous health monitoring. 

According to research by Deloitte, wearable technologies are expected to reduce hospital costs by 16% by 2027, and by 2037 remote patient monitoring devices could save $200 billion. These devices are designed to catch potential health issues early, reducing the need for emergency interventions and therefore lowering the frequency of hospital visits. They also empower patients to manage their health proactively, thanks to personalised reminders aimed at fostering healthier habits (think of the fitness rings on your Apple Watch).

This approach optimises healthcare resources and contributes to reducing overall healthcare costs.

Accelerating drug discovery and reducing costs

In the pharmaceutical sector, generative AI is accelerating drug discovery processes for faster treatment development, especially in underserved medical areas. 

Gen AI models analyse large amounts of data to identify new disease markers, streamlining clinical trials and reducing their duration by up to two years. 

Improved accessibility and inclusivity

AI-driven chatbots and digital assistants are breaking down barriers to healthcare access. They support multiple languages and provide user-friendly interactions. 

These tools make healthcare more inclusive, especially for patients with disabilities or those who face language barriers. By ensuring that everyone can access care without obstacles, AI enhances patient satisfaction and promotes equity in healthcare delivery.

Optimising data management and analysis

The healthcare industry generates vast amounts of data daily, and managing this information efficiently is crucial. Traditional AI assists in organising and analysing healthcare data, which helps professionals extract valuable insights quickly. 

Generative AI can sift through electronic health records, research articles, and clinical notes to identify patterns and trends, enabling more informed decision-making and improving the overall quality of care.

Enhancing supply chain and equipment utilisation

AI is also playing a critical role in optimising the healthcare supply chain and equipment usage. According to research from Accenture, 43% of all working hours across end-to-end supply chain activities could be impacted by gen AI in the near future.

AI algorithms can provide actionable insights into the best supplies or drugs to use, considering cost, quality, and patient outcomes. Additionally, gen AI helps schedule diagnostic equipment, ensuring that costly machines like MRIs and CT scanners are utilised to their full potential. This not only reduces operational costs but also enhances the overall efficiency of healthcare delivery.

Integrating Elixir in Generative AI in Healthcare

As healthcare organisations increasingly adopt generative AI technologies, integrating the right tools and technologies is crucial for maximising their potential. Elixir, a functional and concurrent programming language, is well-suited to underpin the delivery of AI models, due to its scalability, performance and ability to handle large-scale data processing efficiently. Healthcare organisations can leverage Elixir to ensure robust, reliable deployment of their generative AI technologies, to drive innovation and enhance capabilities.

Let’s look into some of the key features of Elixir:

  • Concurrency: Elixir excels at managing numerous simultaneous tasks, crucial for handling the large-scale data processing needs of AI applications.
  • Fault tolerance: Built to handle failures with ease, Elixir ensures continuous operation and reliability—a key trait for healthcare systems that demand high uptime.

  • Scalability: Elixir’s ability to scale horizontally supports the growing computational demands of advanced AI models and data analytics.

Improved data processing and management

Elixir’s concurrency capabilities allow for efficient handling of multiple data requests. This feature is essential for real-time AI applications in healthcare, for example, diagnostic tools and patient management systems. Its scalability allows healthcare organisations to build robust data pipelines, ensuring smooth data flow and faster processing.

Enhanced system reliability

Elixir supports fault-tolerant architectures, maintaining system reliability for critical healthcare applications. Its ability to recover from errors without system-wide disruption means that AI-driven healthcare solutions remain operational and dependable.

Optimised performance

Once again, Elixir’s scalability meets the growing demands of AI workloads. This improves computational efficiency and enhances overall system performance. Its support for real-time processing also improves the responsiveness of healthcare applications, providing immediate feedback and improving operational efficiency.

Integrating Elixir and other technologies with generative AI provides huge potential to enhance healthcare applications, from improving data management to optimising system performance. For business leaders, strategic planning and collaboration are key to harnessing these technologies effectively, ensuring that their organisations can capitalise on the benefits of AI while maintaining robust and reliable systems.

The future of Generative AI 

While generative AI has the proven power to revolutionise the healthcare industry, some ethical and regulatory points still need to be considered. Areas surrounding patient privacy, security and equitable access to AI-powered equipment are still a work in progress. 

Consumer trust is also a critical factor in the future of AI. To optimise AI results, business leaders need to understand consumers’ feelings towards Generative AI.

In a survey conducted by Wolters Kluwer, while those asked did have some concerns or fears surrounding generative AI, 45% were “starting from a position of curiosity.” Over half (52%) of those surveyed also reported that they would be fine with their healthcare providers using gen AI to support their care. 

Survey: American consumers’ feelings towards generative AI in healthcare (Wolters Kluwer)

Trust in gen AI comes down not just to the technology itself, but also to consumers’ trust in their healthcare provider.

As these issues are addressed, a new era of improved health outcomes and more accessible medical services is just years, if not months, away from being a reality.

To conclude

AI has undeniable transformative potential. Leveraging and gaining a better understanding of this technology is key to embracing innovation in the healthcare industry and most importantly, prioritising patient-centric care.

Whether your current systems are falling short or you’re actively seeking new ways to improve patient outcomes and operational efficiency, generative AI provides a powerful solution. It can revolutionise how healthcare services are delivered, from personalised treatment plans to predictive analytics and enhanced diagnostics. 

By adopting gen AI, healthcare providers can position themselves at the forefront of healthcare innovation. If you’d like to talk more about your healthcare needs or how Elixir can power your AI model, feel free to drop us a line.

The post How Generative AI is Transforming Healthcare appeared first on Erlang Solutions.

by Erlang Solutions Team at September 09, 2024 09:00

Monal IM

Monal Internals - XML Query Language

In this new series, I want to shine some light onto specific parts of Monal’s internals. It’s dedicated to programmers or people curious about how Monal works internally. If you want to give some feedback, feel free to send an email to thilo@monal-im.org

Other articles in this series:

The MLXMLNode methods

All incoming and outgoing XMPP stanzas are parsed into/serialised from a tree of nested MLXMLNode instances. This class therefore provides some methods for creating such elements as well as querying them. In this chapter I want to briefly introduce some parts of the MLXMLNode interface before diving into our XML Query Language in the next chapter.

Creating an MLXMLNode

There are several initializers for MLXMLNode:

-(id) initWithElement:(NSString*) element;
-(id) initWithElement:(NSString*) element andNamespace:(NSString*) xmlns;
-(id) initWithElement:(NSString*) element andNamespace:(NSString*) xmlns withAttributes:(NSDictionary*) attributes andChildren:(NSArray*) children andData:(NSString* _Nullable) data;
-(id) initWithElement:(NSString*) element withAttributes:(NSDictionary*) attributes andChildren:(NSArray*) children andData:(NSString* _Nullable) data;
-(id) initWithElement:(NSString*) element andData:(NSString* _Nullable) data;
-(id) initWithElement:(NSString*) element andNamespace:(NSString*) xmlns andData:(NSString* _Nullable) data;

The initializers not taking a namespace argument will create XML nodes that automatically inherit the namespace of their containing node, once added to a tree of XML nodes.

When nesting MLXMLNodes, it looks like this:

//the 'service' variable is an NSDictionary providing the attribute values (not shown here)
MLXMLNode* exampleNode = [[MLXMLNode alloc] initWithElement:@"credentials" andNamespace:@"urn:xmpp:extdisco:2" withAttributes:@{} andChildren:@[
    [[MLXMLNode alloc] initWithElement:@"service" withAttributes:@{
        @"type": service[@"type"],
        @"host": service[@"host"],
        @"port": service[@"port"],
    } andChildren:@[] andData:nil]
] andData:nil];

Querying a (possibly nested) MLXMLNode

All XML queries are implemented as part of the MLXMLNode interface as well. For XML queries, this class provides three different methods:

-(NSArray*) find:(NSString* _Nonnull) queryString, ... NS_FORMAT_FUNCTION(1, 2);
-(id) findFirst:(NSString* _Nonnull) queryString, ... NS_FORMAT_FUNCTION(1, 2);
-(BOOL) check:(NSString* _Nonnull) queryString, ... NS_FORMAT_FUNCTION(1, 2);

find: will return an NSArray listing all results matching your query, while findFirst: will only return the first result of your query (or nil if the resulting NSArray was empty). findFirst: should be used if you are certain that only one element should match (or none at all). check: can be used to determine whether find: would return any results at all.

All three methods take a query string possibly containing printf-style format specifiers (including the %@ specifier, as supported by NSString’s stringWithFormat:) and a variable argument list providing the values for these format specifiers.
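
A minimal usage sketch (the message variable is assumed to be an MLXMLNode holding an incoming message stanza, similar to the examples further below):

MLXMLNode* message = <some incoming message stanza as MLXMLNode tree>;
NSArray* allBodies = [message find:@"body"];            //all matching <body/> child nodes
MLXMLNode* firstBody = [message findFirst:@"body"];     //only the first match, or nil if there is none
BOOL hasBody = [message check:@"body"];                 //YES if at least one node matches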

The Query Language

To query single values out of a complex XML stanza, we use an XML query language inspired by XPath, but not compatible with it. Instead, our language, as implemented in Monal, is a strict superset of Prosody’s query language as described in Prosody’s documentation of util.stanza. This makes it possible to copy queries over from Prosody and use them directly in Monal without any modification.

The query language consists of a path followed by an optional extraction command and an optional conversion command and is parsed by complex regular expressions in MLXMLNode.m. These regular expressions and the usage of the XML query language throughout Monal were security audited in 2024.

Note: If the following description talks about the find: method, the findFirst: and check: methods are automatically included.

Path Segments

The path is built of /-separated segments, each representing an XML node selected by an XML namespace, an element name, or both. The XML namespace is wrapped in { } and prefixes the element name. Each path segment is used to select all XML nodes matching the criteria listed in that path segment. The special wildcard value * means “any element” when used as the element name and “any namespace” when used as the namespace.

If the namespace is omitted, the namespace of the parent node in the XML tree the query is acted upon is used (or *, if there is no parent node), see example 0. The namespace of the parent node is used even if the find: method is executed on a child XML node, see example 1. The element name cannot be omitted and should be set to * if unknown.

A path beginning with a / is called a rooted query. That means the first path segment is used to select the node the find: method is called on; if the leading / is omitted, the first path segment is used to select the child nodes of the node the find: method is called on.

Note: If using such a rooted query to access attributes, element names etc. of the XML node the whole query is acting upon, both the element name and namespace can be fully omitted and are automatically replaced by {*}*. This allows us to write queries like /@h|int or /@autojoin|bool.

The special path segment with element name .. not naming any namespace or other selection criteria (e.g. /../) will ascend one node in the XML node tree to the parent of the XML node that the query reached and apply the remaining query to this XML node. Thus using /{jabber:client}iq/{http://jabber.org/protocol/pubsub}pubsub/items/../../../@type will return the value of the type attribute on the root element (the {jabber:client}iq).
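
Applied to a concrete stanza, this could look like the following sketch (the stanza and the iqNode variable are assumed for illustration):

<iq type='result' id='some_id' xmlns='jabber:client'>
    <pubsub xmlns='http://jabber.org/protocol/pubsub'>
        <items node='some_node'/>
    </pubsub>
</iq>
MLXMLNode* iqNode = <the stanza above as MLXMLNode tree>;
//descend down to <items/>, then ascend back up to the root <iq/> element and extract its type attribute
NSString* iqType = [iqNode findFirst:@"/{jabber:client}iq/{http://jabber.org/protocol/pubsub}pubsub/items/../../../@type"];

MLAssert([iqType isEqualToString:@"result"], @"The extracted iq type should be 'result'!");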

Note: Not using an extraction command (see the next chapter below) will return the matching MLXMLNodes by reference. Changing the attributes etc. of such a reference will change the original MLXMLNode in the XML tree it is part of. If you don’t want that, you’ll have to call copy on the returned MLXMLNodes to decouple them from their original.
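
A short sketch of such decoupling (reusing the message stanza of example 0 below):

MLXMLNode* decoupledBody = [[message findFirst:@"body"] copy];      //changes to decoupledBody won't affect the original XML tree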

Example 0:

<message from='test@example.org' id='some_id' xmlns='jabber:client'>
    <body>Message text</body>
    <body xmlns='urn:some:different:namespace'>This will NOT be used</body>
</message>
MLXMLNode* message = <the stanza above as MLXMLNode tree>;
NSArray<NSString*>* bodyStrings = [message find:@"body#"];

MLAssert(bodyStrings.count == 1, @"Only one body text should be returned!");
MLAssert([bodyStrings[0] isEqualToString:@"Message text"], @"The body with inherited namespace 'jabber:client' should be used!");

Example 1:

<message from='test@example.org' id='some_id' xmlns='jabber:client'>
    <body>Message text</body>
</message>
MLXMLNode* message = <the stanza above as MLXMLNode tree>;
NSString* messageId = [message findFirst:@"/@id"];

MLAssert([messageId isEqualToString:@"some_id"], @"The extracted message id should be 'some_id'!");

More selection criteria

  • Not element name:
    If you want to select all XML nodes not having a specified name, you’ll have to prefix the element name with !. This will negate the selection, e.g. !text will select all XML nodes not named text, see example 2.
  • Element attribute equals value:
    If you want to select XML nodes on the basis of their XML attributes, you can list those attributes as attributeName=value pairs, each inside < >, see example 3. You can use format string specifiers in the value part of those pairs to replace them with the variadic arguments of find:. The order of the variadic arguments has to match all format specifiers of the complete query string given to find:. Note: the value part of those pairs cannot be omitted, use regular expression matching to select for mere XML attribute presence (e.g. <attributeName~^.*$>).
  • Element attribute matches regular expression:
    To select XML nodes on the basis of their XML attributes, but using a regular expression, you’ll have to use attributeName~regex pairs inside < >. No format string specifiers will be replaced inside your regular expression following the ~. You’ll have to use ^ and $ yourself to match the beginning and end of the attribute value, e.g. <attributeName~.> will match all attribute values having at least one character, while <attributeName~^.$> will match all attribute values having exactly one character.

Example 2:

<stream:error>
    <not-well-formed xmlns='urn:ietf:params:xml:ns:xmpp-streams'/>
    <text xmlns='urn:ietf:params:xml:ns:xmpp-streams'>Some descriptive Text...</text>
</stream:error>
MLXMLNode* streamError = <the stanza above as MLXMLNode tree>;
NSString* errorReason = [streamError findFirst:@"{urn:ietf:params:xml:ns:xmpp-streams}!text$"];

MLAssert([errorReason isEqualToString:@"not-well-formed"], @"The extracted error should be 'not-well-formed'!");

Example 3 (also using an extraction command, see below):

<iq id='605818D4-4D16-4ACC-B003-BFA3E11849E1' to='user@example.com/Monal-iOS.15e153a8' xmlns='jabber:client' type='result' from='asdkjfhskdf@messaging.one'>
    <pubsub xmlns='http://jabber.org/protocol/pubsub'>
        <subscription node='eu.siacs.conversations.axolotl.devicelist' subid='6795F13596465' subscription='subscribed' jid='user@example.com'/>
    </pubsub>
</iq>
MLXMLNode* iq = <the stanza above as MLXMLNode tree>;
NSString* subscriptionStatus = [iq findFirst:@"/<type=result>/{http://jabber.org/protocol/pubsub}pubsub/subscription<node=%@><jid=%@>@subscription", @"eu.siacs.conversations.axolotl.devicelist", @"user@example.com"];

MLAssert([subscriptionStatus isEqualToString:@"subscribed"], @"The extracted value of the subscription attribute should be 'subscribed'!");

Extraction Commands

An extraction command can be appended to the last path segment. Without an extraction command, find: will return the full MLXMLNode matching the selection criteria of the XML query. If you’d rather read a specific attribute, the element value etc. of the full XML node, you’ll have to use one of the extraction commands below:

  • @attributeName:
    This will return the value of the attribute named after the @ as NSString, use a conversion command to convert the value to other data types.
  • @@: This will return all attributes of the selected XML node as key-value-pairs in an NSDictionary (see the sketch after this list). No conversion commands can be used together with this extraction command.
  • #: This will return the text contents of the selected XML node as NSString, use a conversion command to convert the value to other data types.
  • $: This will return the element name of the selected XML node as NSString. This is only really useful if the last path segment contained a wildcard element name or its element name was negated. A Conversion command can be used to convert the returned element name to other data types as well.
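
A short sketch of the @@ extraction command (reusing the message stanza of example 1; the asserted id value is taken from that stanza):

MLXMLNode* message = <the message stanza of example 1 as MLXMLNode tree>;
//a rooted query with omitted element name and namespace: return all attributes of the <message/> element itself
NSDictionary* attributes = [message findFirst:@"/@@"];

MLAssert([attributes[@"id"] isEqualToString:@"some_id"], @"The id attribute should be part of the returned NSDictionary!");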

For data-form (XEP-0004) subqueries, see the corresponding section below.

Conversion Commands

Conversion commands can be used to convert the returned NSString of an extraction command to some other data type. Conversion commands cannot be used without an extraction command and must be separated from the preceding extraction command by a pipe symbol (|). The following conversions are currently defined:

  • bool:
    This will convert the extracted NSString to an NSNumber representing a BOOL. true/1 becomes @YES and false/0 becomes @NO. This is in accordance with the representation of truth values in XMPP.
  • int:
    This will convert the extracted NSString to an NSNumber representing an NSInteger (integerValue property).
  • uint:
    This will convert the extracted NSString to an NSNumber representing an NSUInteger (unsignedIntegerValue property).
  • double:
    This will convert the extracted NSString to an NSNumber representing a double (doubleValue property).
  • datetime:
    This will use the HelperTools method parseDateTimeString: to parse the given NSString into an NSDate object.
  • base64:
    This will use the HelperTools method dataWithBase64EncodedString: to parse the given NSString into an NSData object.
  • uuid:
    This will try to parse the given NSString into an NSUUID object using the initWithUUIDString initializer of NSUUID. This will return nil for an invalid string, which will omit this result from the NSArray returned by find: (findFirst: will return nil, and check: will return NO).
  • uuidcast:
    This will do the same as the uuid conversion command for valid uuid strings, but use the HelperTools method stringToUUID to cast any other given string to a UUIDv4 by hashing it using SHA256 and arranging the result to resemble a valid UUIDv4.

Example 4 (attribute extraction command together with a bool conversion command):

<iq type='result' id='juliet1'>
  <fin xmlns='urn:xmpp:mam:2' complete='true'>
    <set xmlns='http://jabber.org/protocol/rsm'>
      <first index='0'>28482-98726-73623</first>
      <last>09af3-cc343-b409f</last>
    </set>
  </fin>
</iq>
MLXMLNode* iqNode = <the stanza above as MLXMLNode tree>;
if([[iqNode findFirst:@"{urn:xmpp:mam:2}fin@complete|bool"] boolValue])
    DDLogInfo(@"Mam query finished");

Example 5 (attribute extraction command together with a datetime conversion command):

<message from='romeo@montague.net/orchard' to='juliet@capulet.com' type='chat'>
    <body>O blessed, blessed night! I am afeard.</body>
    <delay xmlns='urn:xmpp:delay' from='capulet.com' stamp='2002-09-10T23:08:25Z'/>
</message>
MLXMLNode* messageNode = <the stanza above as MLXMLNode tree>;
NSDate* delayStamp = [messageNode findFirst:@"{urn:xmpp:delay}delay@stamp|datetime"];

MLAssert(delayStamp.timeIntervalSince1970 == 1031699305, @"The delay stamp should be 1031699305 seconds after the epoch!");

Some more queries as found in our codebase:

  • {urn:xmpp:jingle:1}jingle<action~^session-(initiate|accept)$>
  • error/{urn:ietf:params:xml:ns:xmpp-stanzas}item-not-found
  • {urn:xmpp:avatar:metadata}metadata/info
  • {urn:xmpp:avatar:data}data#|base64
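
The last of these queries could, for instance, be used like in the following sketch (the dataNode variable is assumed to hold a PEP avatar data item):

MLXMLNode* dataNode = <an avatar data item as MLXMLNode tree>;
//extract the text content of the <data xmlns='urn:xmpp:avatar:data'/> element and base64-decode it into an NSData object
NSData* avatarData = [dataNode findFirst:@"{urn:xmpp:avatar:data}data#|base64"];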

The data-forms (XEP-0004) query language extension

To query fields etc. of a XEP-0004 data-form, the last path segment of an XML query can contain a data-forms subquery. The parser for these subqueries is an MLXMLNode extension implemented in XMPPDataForm.m and glued into MLXMLNode.m as the extraction command \ (backslash). This extraction command is also special in that it has to be terminated by a \ (optionally followed by a conversion command, see below).

Note: since our query is a string, double backslashes (\\) have to be used because of string escaping rules.

Like other extraction commands, these subqueries must be in the last path segment. Specifying the element name and namespace of the node this extraction command is applied to is optional; they automatically default to the element name x and namespace jabber:x:data as defined by XEP-0004.

This query language extension is its own small query language tailored to data-forms implemented in -(id _Nullable) processDataFormQuery:(NSString*) query;. To ease its use, this language reuses some constructs of the main query language, but gives them a new meaning:

  • “Namespace” and “element name”:
    The subquery can begin with something looking like a namespace and element name (both optional) like so: {http://jabber.org/protocol/muc#roominfo}result. The “element name” is used to select data forms with this form-type (result in this case). The “namespace” is used to select data-forms with a form field (usually of type hidden) with name FORM_TYPE having this value, see example 6. The special form-type * and FORM_TYPE value * can be used to denote “any form-type” and “any FORM_TYPE field value”.
  • Item index:
    This is something not present in the main query language. Between the form-type (the “element name”, see above) and the “extraction command” (see below) an index in square brackets is allowed ([0]). An example query using an index as seen in our codebase would be \\result[0]@expire\\ or \\[0]@expire\\. An index is only allowed for data-forms having multiple item elements encapsulating the form fields, see example 8 of XEP-0004. If the index is out of bounds (e.g. greater than or equal to the count of <item/> XML nodes in the form), the data-form query will return nil, which will be omitted from the resulting NSArray by the MLXMLNode implementation of find: (findFirst: will return nil, and check: will return NO).
  • Extraction command:
    Data-Form subqueries have only two extraction commands: @fieldName and &fieldName. @fieldName is used to extract the value of that field, while &fieldName returns an NSDictionary describing that field, as returned by the -(NSDictionary* _Nullable) getField:(NSString* _Nonnull) name; method of XMPPDataForm (see the sketch after example 6 below).

Note: The implementation in XMPPDataForm.m has many useful methods for creating and working with XEP-0004 data-forms. Make sure to check out XMPPDataForm.h or the implementation in XMPPDataForm.m.

Note: An @fieldName extraction command can be used together with a conversion command, see example 6. Conversion commands are not allowed for &fieldName extraction commands or data-form queries not using an extraction command at all (e.g. returning the whole data-form).

Example 6:

<iq from='upload.montague.tld' id='step_02' to='romeo@montague.tld/garden' type='result'>
  <query xmlns='http://jabber.org/protocol/disco#info'>
    <identity category='store' type='file' name='HTTP File Upload' />
    <feature var='urn:xmpp:http:upload:0' />
    <x type='result' xmlns='jabber:x:data'>
      <field var='FORM_TYPE' type='hidden'>
        <value>urn:xmpp:http:upload:0</value>
      </field>
      <field var='max-file-size'>
        <value>5242880</value>
      </field>
    </x>
  </query>
</iq>
MLXMLNode* iqNode = <the stanza above as MLXMLNode tree>;
NSInteger uploadSize = [[iqNode findFirst:@"{http://jabber.org/protocol/disco#info}query/\\{urn:xmpp:http:upload:0}result@max-file-size\\|int"] integerValue];

MLAssert(uploadSize == 5242880, @"Extracted upload size should be 5242880 bytes!");
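
The &fieldName extraction command could be used on the same stanza like in this sketch (the exact contents of the returned NSDictionary are defined by XMPPDataForm’s getField: method):

//returns an NSDictionary describing the max-file-size field instead of just its value
NSDictionary* maxFileSizeField = [iqNode findFirst:@"{http://jabber.org/protocol/disco#info}query/\\{urn:xmpp:http:upload:0}result&max-file-size\\"];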

Some more data-form queries as found in our codebase:

  • {http://jabber.org/protocol/disco#info}query/\\{http://jabber.org/protocol/muc#roominfo}result@muc#roomconfig_roomname\\
  • {http://jabber.org/protocol/commands}command<node=urn:xmpp:invite#invite>/\\[0]@expire\\|datetime (the form-type and FORM_TYPE field value were omitted, so the query matches every data-form)
  • {http://jabber.org/protocol/commands}command<node=urn:xmpp:invite#invite>/\\@expire\\|datetime (the form-type and FORM_TYPE field value were omitted, so the query matches every data-form)

September 09, 2024 00:00

September 06, 2024

Erlang Solutions

Erlang Solutions announces latest business win with Razoyo to meet growing demand

Erlang Solutions, a global technology and consultancy service provider, is pleased to announce its latest customer win with Razoyo, a leading e-commerce consultancy and software development agency.

Razoyo needed urgent support and additional team members to handle sudden increased demand and extra client needs. They appointed Erlang Solutions for their expertise and ability to provide the right specialists fast, ensuring Razoyo meets their quick turnaround.

Commenting on the latest partnership, Mark Cowan, International Business Development Manager at Erlang Solutions, said: “We are excited to support our newest client Razoyo with our expert team and specialist resources. We look forward to working with them to deliver continued outstanding service.”

Paul Byrne, President at Razoyo, added “When faced with such high demand, outsourcing was key for us to manage this peak time in the business. Erlang Solutions stepped in swiftly with their expertise to meet our needs. We look forward to working with them to maintain our high standards of delivery.”   

Razoyo is an award-winning eCommerce and Development Agency serving the needs of medium and large-size businesses. They work with thousands of merchants each year to improve and expand their business.

With 26 years of expertise, Erlang Solutions is renowned for its world-leading consultants in Erlang, Elixir, and beyond. The company delivers efficient and reliable system solutions for some of the world’s most ambitious companies.

The post Erlang Solutions announces latest business win with Razoyo to meet growing demand appeared first on Erlang Solutions.

by Erlang Solutions Team at September 06, 2024 11:30