Planet Jabber

June 13, 2024

Erlang Solutions

Top 5 Tips to Ensure IoT Security for Your Business

In an increasingly tech-driven world, the implementation of IoT for business is a given. According to the latest data, there are currently 17.08 billion connected IoT devices, and counting. A growing number of devices requires robust IoT security to maintain privacy, protect sensitive data and prevent unauthorised access to connected devices.

A single compromised device can be a threat to an entire network. For businesses, it can lead to major financial losses, operational disruptions and lasting damage to brand reputation. We will be taking you through five key considerations for ensuring IoT security in your business, including data encryption methods, password management, IoT audits, workplace education and the importance of disabling unused features.

Secure password practices

Weak passwords make IoT devices susceptible to unauthorised access, leading to data breaches, privacy violations and increased security risks. When companies install devices without changing default passwords, or create oversimplified ones, they create an entry point for attackers. Implementing strong and unique passwords helps protect against these threats.

Password managers

Each device in a business should have its own unique password that should change on a regular basis. According to the 2024 IT Trends Report by JumpCloud, 83% of organisations surveyed use password-based authentication for some IT resources.

Consider using a business-wide password manager that stores your passwords securely and lets you use unique passwords across multiple accounts.

Password managers are also incredibly important as they:

  • Help to spot fake websites, protecting you from phishing scams and attacks.
  • Allow you to synchronise passwords across multiple devices, making it easy and safe to log in wherever you are.
  • Track whether you are re-using the same password across different accounts, adding another layer of protection.
  • Flag unexpected password changes that could indicate a security breach.

Multi-factor authentication (MFA)

Multi-factor authentication (MFA) adds an additional layer of security. It requires additional verification beyond just a password, such as SMS codes, biometric data or other forms of app-based authentication. You’ll find that many password managers actually offer built-in MFA features for enhanced security.

Some additional security benefits include:

  • Regulatory compliance
  • Safeguarding without password fatigue
  • Easily adaptable to a changing work environment
  • An extra layer of security compared to two-factor authentication (2FA)
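
To give a flavour of the app-based authentication mentioned above, here is a minimal, hedged sketch of how a time-based one-time code (TOTP, RFC 6238) can be derived from a shared secret, written in Elixir using Erlang/OTP’s built-in :crypto module. The secret below is a made-up example value, not a recommendation for how to store credentials.

```elixir
import Bitwise

# Shared secret provisioned when the authenticator app is enrolled (example value only).
secret = "example-shared-secret"

# TOTP uses the current 30-second time window as a counter.
counter = div(System.os_time(:second), 30)
msg = <<counter::unsigned-big-integer-size(64)>>

# HMAC-SHA1 of the counter, then dynamic truncation (RFC 4226 / RFC 6238).
hmac = :crypto.mac(:hmac, :sha, secret, msg)
offset = :binary.at(hmac, byte_size(hmac) - 1) &&& 0x0F
<<code::unsigned-big-integer-size(32)>> = binary_part(hmac, offset, 4)

six_digit_code =
  (code &&& 0x7FFFFFFF)
  |> rem(1_000_000)
  |> Integer.to_string()
  |> String.pad_leading(6, "0")

IO.puts("One-time code: #{six_digit_code}")
```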

As soon as an IoT device is connected to a new network, it is strongly recommended that you reset its default credentials and replace them with a secure, complex password. Using a password manager allows you to generate unique passwords for each device and secure your IoT endpoints optimally.

Data encryption at every stage

Why is data encryption so necessary? With the continued growth of connected devices, data protection is a growing concern. In IoT, sensitive information (personal, financial and location data, etc.) is vulnerable to cyber-attacks if transmitted over public networks. When done correctly, data encryption renders that data unreadable to anyone without the right key. Once data is encrypted, it is safeguarded, mitigating unnecessary risks.


How to encrypt data in IoT devices

There are a few data encryption techniques available to secure IoT devices from threats. Here are some of the most popular techniques:

Triple Data Encryption Standard (Triple DES): Applies three successive rounds of DES encryption to each data block. Historically used for mission-critical applications, it is now considered a legacy standard and is being phased out in favour of AES.

Advanced Encryption Standard (AES): A commonly used encryption standard, known for its high security and performance. This is used by the US federal government to protect classified information.

Rivest-Shamir-Adleman (RSA): This is based on public and private keys, used for secure data transfer and digital signatures.

Each encryption technique has its strengths, but it is crucial to choose what best suits the specific requirements of your business.
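
To make this concrete, here is a hedged sketch of AES-256-GCM encryption of a small sensor payload, written in Elixir using the :crypto module that ships with Erlang/OTP (discussed further in the next section). Key and IV handling are deliberately simplified for illustration; in production, keys would live in a secure key store and a nonce must never be reused with the same key.

```elixir
# Generate a random 256-bit key and a 96-bit IV/nonce (illustration only).
key = :crypto.strong_rand_bytes(32)
iv = :crypto.strong_rand_bytes(12)
aad = "device-42"                      # associated data bound to the ciphertext
plaintext = ~s({"temperature": 21.4})

# Encrypt: returns the ciphertext plus an authentication tag.
{ciphertext, tag} =
  :crypto.crypto_one_time_aead(:aes_256_gcm, key, iv, plaintext, aad, true)

# Decrypt and authenticate: returns the original payload, or :error if tampered with.
^plaintext =
  :crypto.crypto_one_time_aead(:aes_256_gcm, key, iv, ciphertext, aad, tag, false)
```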

Encryption support with Erlang/Elixir

When implementing data encryption protocols for IoT security, Erlang and Elixir offer great support to ensure secure communication between IoT devices. We go into greater detail about IoT security with Erlang and Elixir in a previous article, but here is a reminder of the capabilities that make them ideal for IoT applications:

  1. Concurrent and fault-tolerant nature: Erlang and Elixir can handle large numbers of concurrent connections and processes. This ensures that encryption operations do not bottleneck the system, allowing businesses to maintain high-performing, reliable systems under varying workloads.
  2. Built-in libraries: Both languages come with powerful libraries, providing effective tools for implementing encryption standards such as AES and RSA (see the sketch below).
  3. Scalable: Both are inherently scalable, allowing for secure data handling across multiple IoT devices.
  4. Easy integration: The syntax of Elixir makes it easier to integrate encryption protocols within IoT systems. This reduces development time and increases overall efficiency for businesses.

Erlang and Elixir can be powerful tools for businesses, enhancing the security of IoT devices and delivering high-performance systems that ensure robust encryption support for peace of mind.
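
As a brief illustration of the built-in libraries mentioned in point 2 above, here is a hedged Elixir sketch that generates an RSA key pair and signs an example payload with Erlang/OTP’s :public_key module. The payload and key size are illustrative assumptions, not recommendations.

```elixir
# Generate a 2048-bit RSA key pair (in practice, keys are generated once and
# stored in a secure key store or hardware element on the device).
private_key = :public_key.generate_key({:rsa, 2048, 65_537})
{:RSAPrivateKey, _, modulus, public_exp, _, _, _, _, _, _, _} = private_key
public_key = {:RSAPublicKey, modulus, public_exp}

message = "example firmware checksum to sign"

# Sign with the private key; anyone holding the public key can verify it.
signature = :public_key.sign(message, :sha256, private_key)
true = :public_key.verify(message, :sha256, signature, public_key)
```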

Regular IoT inventory audits

Performing regular security audits of your systems is critical in protecting against vulnerabilities. Keeping up with the pace of IoT innovation often means some IoT security considerations get pushed to the side, but identifying weaknesses in existing systems allows organisations to implement much-needed strategies.

Types of IoT security testing

We’ve explained how IoT audits are key in maintaining secure systems. Now let’s take a look at some of the common types of IoT security testing options available:


Firmware software analysis

Firmware analysis is a key part of IoT security testing. It examines the firmware, the core software embedded in the hardware of IoT products (routers, monitors and so on). Examining the firmware means security tests can identify system vulnerabilities that might not be initially apparent, improving the overall security of business IoT devices.

Threat modelling

In this popular testing method, security professionals create a checklist based on potential attack methods, and then suggest ways to mitigate them. This ensures the security of systems by offering analysis of necessary security controls.

IoT penetration testing

This type of security testing finds and exploits security vulnerabilities in IoT devices. IoT penetration testing is used to check the security of real-world IoT devices, including the entire ecosystem, not just the device itself.

Incorporating these testing methods is essential to help identify and mitigate system vulnerabilities. Being proactive and addressing these potential security threats can help businesses maintain secure IoT infrastructure, enhancing operational efficiency and data protection.

Training and educating your workforce

Employees can be an entry point for network threats in the workplace. 

The days when BYOD (bring your own device) meant only an employee’s laptop, tablet and smartphone in the office are long gone. Now, personal IoT devices are also used in the workplace. Think of popular wearables like smartwatches, fitness trackers, e-readers and portable game consoles. Even connected appliances like smart printers and smart coffee makers are increasingly common in office spaces.

Example of increasing IoT devices in the office. Source: House of IT

The variety of IoT devices spread across a business network makes it a prime target for cybercrime, through techniques such as phishing, credential theft and malware.

Phishing attempts are among the most common. Even the most ‘tech-savvy’ person can fall victim to them. Attackers are skilled at making phishing emails seem legitimate, spoofing real domains and email addresses to pass as a legitimate business.

Malware is another common technique, often concealed in email attachments disguised as unassuming Microsoft Office documents.

Remote working and IoT security

Threat actors are increasingly targeting remote workers. Research by Global Newswire shows that remote working increases the frequency of cyber attacks by a staggering 238%.

Remote employees housing sensitive data on various IoT devices makes the need for training even more important. There is now a rise in companies moving to secure personal IoT devices used for home working with the same level of security they would apply to corporate devices.

How are they doing this? IoT management solutions. They provide visibility and control over other IoT devices. Key players across the IoT landscape are creating increasingly sophisticated IoT management solutions, helping companies administer and manage relevant updates remotely.

The use of IoT devices is inevitable if your enterprise has a remote workforce. 

Regular remote updates for IoT devices are essential to ensure the software is up-to-date and patched. But even with these precautions, you should be aware of IoT device security risks and take steps to mitigate them.

Importance of IoT training

Getting employees involved in the security process encourages awareness and vigilance for protecting sensitive network data and devices.

Comprehensive and regularly updated education and training are vital to prepare end-users for various security threats. Remember that a business network is only as secure as its least informed or untrained employee.

Here are some key points employees need to know to maintain IoT security:

  • The best practices for security hygiene (for both personal and work devices and accounts).
  • Common and significant cybersecurity risks to your business.
  • The correct protocols to follow if they suspect they have fallen victim to an attack.
  • How to identify phishing, social engineering, domain spoofing, and other types of attacks.

Investing the time and effort to ensure your employees are well informed and prepared for potential threats can significantly enhance your business’s overall IoT security standing.

Disable unused features to ensure IoT security

Enterprise IoT devices come with a range of functionalities. Take a smartwatch, for example. Its main purpose is of course to tell the time, but it might also include Bluetooth, Near-Field Communication (NFC) and voice activation. If you aren’t using these features, you’re opening the door for hackers to breach your device. Deactivating unused features reduces the risk of cyberattacks, as it limits the ways hackers can breach these devices.

Benefits of disabling unused features

If these additional features are not being used, they can create unnecessary security vulnerabilities. Disabling unused features helps to ensure IoT security for businesses in several ways:

  1. Reduces attack surface: Unused features provide extra entry points for attackers. Disabling features limits the number of potential vulnerabilities that could be exploited, in turn reducing attacks overall.
  2. Minimises risk of exploits: Many IoT devices come with default settings that enable features which might not be necessary for business operations. Disabling these features minimises the risk of weak security.
  3. Improves performance and stability: Unused features can consume resources and affect the performance and stability of IoT devices. By disabling them, devices run more efficiently and are less likely to experience issues that could be exploited by attackers.
  4. Simplifies security management: Managing fewer active features simplifies security oversight. It becomes simpler to monitor and update any necessary features.
  5. Enhances regulatory compliance: Disabling unused features can help businesses meet regulatory requirements by ensuring that only the necessary and secure functionalities are active.

To conclude

The continued adoption of IoT is not stopping anytime soon. Neither are the possible risks. Implementing even some of the five tips we have highlighted can significantly mitigate the risks associated with the growing number of devices used for business operations.

Ultimately, investing in your business’s IoT security is all about safeguarding the entire network, maintaining the continuity of day-to-day operations and preserving the reputation of your business. You can learn more about our current IoT offering by visiting our IoT page or contacting our team directly.

The post Top 5 Tips to Ensure IoT Security for Your Business appeared first on Erlang Solutions.

by Erlang Solutions Team at June 13, 2024 11:01

June 10, 2024

Gajim

Gajim 1.9.0

Half a year after the last release, Gajim 1.9.0 is finally here. 🎉 This release brings long awaited support for message replies and message reactions. Message Moderation has been improved as well. Say hello to voice messages! Thank you for all your contributions!

What’s New

It took us quite some time, but now it’s here: Gajim 1.9 comes with a complete database overhaul, which enables new features such as Message Replies and Message Reactions.

Message Replies (XEP-0461: Message Replies) offer rich context, which wasn’t available previously when using message quotes. With Message Replies, Gajim shows you the author’s profile picture, nickname, and also the time the message was sent. Clicking a referenced message will jump to the original message.

Message Replies in Gajim 1.9

Message Reactions (XEP-0444: Message Reactions) allow you to react to messages with an emoji of your choice. When hovering over a message, a floating action menu appears. This action menu offers three quick reactions and even more when clicking on the plus button. Hovering over a reaction shows a tooltip with information about who sent which reaction, which is especially useful in group chats.

Message Reactions in Gajim 1.9.0

Message Moderation (XEP-0425: Moderated Message Retraction) has been updated to the latest version while staying compatible with older implementations, thus improving Gajim’s tools against spam.

The new database backend is based on SQLAlchemy and allows us to easily adapt to new requirements of upcoming standards, for example message retraction and rich file transfers.

Thanks to our contributor @mesonium, who brought audio previews to Gajim a year ago, Gajim is now able to record voice messages.

Voice message recording in Gajim 1.9.0

What else changed:

  • Gajim’s message input now offers proper undo/redo functionalities
  • Messages containing only an emoji are now displayed larger
  • Message merging has been improved
  • Notifications now show icons (e.g. a user’s profile picture) in more desktop environments
  • Your connection state is now shown directly above the message input
  • Group chat messages are displayed as ‘pending’ until they have been acknowledged by the server
  • Group chat avatars can now be removed
  • The main menu can now be toggled via Ctrl+M
  • ‘Start Chat’ now shows contact list groups, status messages, and more
  • Issues with using the Ctrl+C shortcut for copying message content have been fixed

This release also comes with many bugfixes. Have a look at the changelog for a complete list.


As always, don’t hesitate to contact us at gajim@conference.gajim.org or open an issue on our Gitlab.

June 10, 2024 00:00

June 06, 2024

Erlang Solutions

10 Unusual Blockchain Use Cases

When Blockchain technology was first introduced with Bitcoin in 2009, no one could have foreseen its impact on the world or the unusual cases of blockchain that have emerged. Fast forward to now and Blockchain has become popular for its ability to ensure data integrity in transactions and smart contracts. 

Thanks to its cost-effectiveness, transparency, speed and top security, it has found its way into many industries, with blockchain spending expected to reach $19 billion this year.

In this post, we will be looking into 10 use cases that have caught our attention, in industries benefiting from Blockchain in unusual and impressive ways.

Ujo Music: Transforming payment for artists

Let’s start exploring the first unusual use case for blockchain with Ujo Music.

Ujo Music started with a mission to get artists paid fairly for their music, addressing the issues of inadequate royalties from streaming and complicated copyright laws.

To solve this, they turned to blockchain technology, specifically Ethereum. Using it, Ujo Music was able to create a platform that allows music owners to receive royalty payments automatically. Artists also retain their rights thanks to smart contracts and cryptocurrencies. This approach lets artists access their earnings instantly, without the fees or waiting times associated with more traditional systems.

As previously mentioned, blockchain also allows for transparency and security, which is key in preventing theft and copyright infringement of the owner’s work. Ujo Music is transforming the payment landscape for artists in the digital space, allowing for better management of, and rights over, their music.

Cryptokitties: Buying virtual cats and gaming

For anyone looking to collect and breed digital cats in 2017, Cryptokitties was the place to be. While the idea of a cartoon crypto animation seems incredibly niche, the initial Cryptokitties craze is one that cannot be denied in the blockchain space.

Upon its launch, it immediately went viral, with the alluring tagline “The world’s first Ethereum game.” According to nonfungible.com, the NFT felines saw sales volume spike from just 1,500 on launch day to 52,000 by the end of 2017.

CryptoKitties was among the first projects to harness smart contracts by attaching code to data constructs called tokens on the Ethereum blockchain. Each chunk of the game’s code (which it refers to as a “gene”) describes the attributes of a digital cat. Players buy, collect, sell, and even breed new felines. 

Source: Dapper Labs

Just like individual Ethereum tokens and bitcoins, the cat’s code also ensures that the token representing each cat is unique, which is where the nonfungible token, or NFT, comes in. A fungible good is, by definition, one that can be replaced by an identical item—one bitcoin is as good as any other bitcoin. An NFT, by contrast, has a unique code that applies to no other NFT.

Blockchain can be used in gaming more broadly to create digital and analogue gaming experiences. By buying CryptoKitties, players could invest in, build and extend their gaming experience.

ParagonCoin for Cannabis

Our next unusual blockchain case stems from legal cannabis. 

Legal cannabis is a booming business, expected to be worth $1.2 billion by the end of 2024. With this amount of money, a cashless solution offers business owners further security. Transactions are easily trackable and offer transparency and accountability that traditional banking doesn’t.

ParagonCoin business roadmap

Transparency in the legal cannabis space is key for businesses looking to challenge its negative image. ParagonCoin, a cryptocurrency startup, had a unique value proposition for its entire ecosystem, making it clear that its platform would not be used for any illegal activity.

Though the project has since folded, ParagonCoin was a pioneer in its field in enabling B2B payments. At the time of its launch, paying for services was only possible with cash, as cannabis-related businesses were not allowed to officially hold a bank account.

This created a dire knock-on effect, making it difficult for businesses to pay solicitors, staff and other operational costs. The only ways to keep an operation running would have been unsafe, inconvenient and possibly illegal. ParagonCoin aimed to remedy this by asking businesses to adopt its PRG token as a payment system to address these immediate issues.

Here are some other ways ParagonCoin applied blockchain technology in the cannabis industry:

  • Regulatory compliance: Simplifying compliance issues at a local and federal level.
  • Secure transactions: Utilising smart contracts to automate and enforce agreement terms, reducing the risk of fraud.
  • Decentralised marketplace: Creating a platform for securely listing and reviewing products and services, while fostering a community of engaged users, businesses and regulators.
  • Innovative business models: Facilitating crowdfunding to transparently raise business capital.

These cases highlight blockchain technology’s ability to enhance transparency, compliance and security within even the most unexpected industries.

Siemens partnership: Sharing solar power

Siemens has partnered with the startup LO3 Energy on an app called Brooklyn Microgrid. It allows residents of Brooklyn who own solar panels to transfer their energy to others who don’t have this capability. Consumers and solar panel owners are in control of the entire transaction.

Residents with solar panels sell excess energy back to their neighbours, in a peer-to-peer transaction. If you’d like to learn more about the importance of peer-to-peer (p2p) networks, you can check out our post about the Principles of Blockchain.

Microgrids reduce the amount of energy that gets lost during transmission. It provides a more efficient alternative since approximately 5% of electricity generated in the US is lost in transit. The Brooklyn microgrid not only minimises these losses but also offers economic benefits to those who have installed solar panels, as well as the local community.

Björn Borg and same-sex marriage

Same-sex marriage is still banned in a majority of countries across the world. With that in mind, the Swedish sportswear brand Björn Borg devised an ingenious way for loved ones to be joined in holy matrimony on the blockchain, regardless of sexual orientation. But how?

Blockchain is stereotypically linked with money, but remove those connotations and you have an effective ledger that can record events as well as transactions.

Björn Borg has put this loophole to good use by creating the digital platform Marriage Unblocked, where you can propose, marry and exchange vows all on the blockchain. What’s more, the records can be kept anonymous, offering security for those in potential danger, and you get the flexibility of smart contracts.

Of course, you can request a certificate to display proudly too!

Whilst this doesn’t carry any legal standing, everything is produced and stored online. If religion or government isn’t a primary concern of yours, where’s the harm in a blockchain marriage?

Tangle: Simplifying the Internet of Things (IoT)

Blockchain offers ledgers that can record the huge amounts of data produced by IoT systems. Once again the upside is the level of transparency it offers that simply cannot be found in other services.

The Internet of Things is one of the most exciting developments in technology. Its connected ecosystems can record and share various interactions. Blockchain lends itself perfectly to this, as it can transfer data and provide identification for both public and private sector use cases. For example:

Public sector: infrastructure management, taxes and other municipal services.

Private sector: logistics upgrades, warehouse tracking, greater efficiency, and enhanced data capabilities.

IOTA’s Tangle is a distributed ledger designed specifically for IoT, handling machine-to-machine micropayments. It reengineers distributed ledger technology (DLT), enabling the secure exchange of both value and data.

Tangle is the data structure behind micro-transaction crypto tokens that are purposely optimised and developed for IoT. It differs from other blockchains and cryptocurrencies by having a much lighter, more efficient way to deal with tens of billions of devices. 

It includes a decentralised peer-to-peer network that relies on a Directed Acyclic Graph (DAG), which creates a distributed ledger rather than “blocks”. There are no transaction fees, no mining, and no external consensus process, and data can still be transferred securely between digital devices.

Walmart and IBM: Improving supply chains

Blockchain’s real-time tracking is essential for any company with a significant number of supply chains. 

Walmart partnered with IBM to build a system on the Hyperledger Fabric blockchain to track food from the supplier to the shop shelf. When a food-borne disease outbreak occurs, it can take weeks to find the source. Better traceability through blockchain helps save time and lives, allowing companies to act fast and protect affected farms.

Walmart chose blockchain technology as the best option for a decentralised food supply ecosystem. With IBM, they created a food traceability system based on Hyperledger Fabric. 

The food traceability system, first piloted on two products, worked, and Walmart can now trace the origin of over 25 products from five different suppliers using it.

Agora for elections and voter fraud

Voting on a blockchain offers full transparency, and reduces the chance of voter fraud. A prime example of this is in Sierra Leone, which in 2018 became the first country to run a blockchain-based election, with 70% of the pollers using the technology to anonymously store votes in an immutable ledger. 

Sierra Leone results on the Agora blockchain

These results were placed on Agora’s blockchain, and by allowing anyone to view them, the government aimed to build trust with its citizens. The platform reduced the controversy and costs incurred when using paper ballots.

The result is a trustworthy and legitimate outcome that also limits the amount of hearsay from opposition voters and parties, especially in Sierra Leone, which has faced heavy corruption claims in the past.

MedRec and Dentacoin: Healthcare

With the emphasis on keeping many records in a secure manner, blockchain lends itself nicely to medical records and healthcare.

MedRec is one business using blockchain to keep secure files of medical records by using a decentralised CMS and smart contracts. This also allows transparency of data and the ability to make secure payments connected to your health. Blockchain can also be used to track dental care in the same sort of way.

One example is Dentacoin, which uses an ERC-20 token. It can be used for dental records, but also to ensure dental tools and materials are sourced appropriately, to verify that tools are used on the correct patients, to connect networks that can exchange information quickly, and as a compliance tool.

Everledger: Luxury items and art selling

Blockchain’s ability to track data and transactions lends itself nicely to the world of luxury items.

Everledger.io is a blockchain-based platform that enhances transparency and security in supply chain management. It’s particularly used for high-value assets such as diamonds, art, and fine wines. 

The platform uses blockchain technology to create a digital ledger that records the provenance and lifecycle of these assets, ensuring authenticity and preventing fraud. Through offering a tamper-proof digital ledger, Everledger allows stakeholders to trace the origin and ownership history of valuable assets, reducing the risk of fraud and enhancing overall market transparency.

The diamond industry is a great use case of the Everledger platform. 

By recording each diamond’s unique attributes and history on an immutable blockchain, Everledger provides a secure and transparent way to verify the authenticity and ethical sourcing of diamonds. This not only helps combat the circulation of conflict diamonds but also builds consumer trust by providing a verifiable digital record of each diamond’s journey from mine to market.

To conclude

While there is a buzz around blockchain, it’s important to note that the industry is well established, and these surprising cases display the broad and exciting nature of blockchain as a whole. There are still other advantages to blockchain that we haven’t delved into in this article, but we’ve highlighted one of its greatest advantages for businesses and consumers alike: its transparency.
If you or your business are working on an unusual blockchain case, let us know; we would love to hear about it! Also, if you are looking for reliable FinTech or blockchain experts, give us a shout, as we offer many services to fix issues of scale.

The post 10 Unusual Blockchain Use Cases appeared first on Erlang Solutions.

by Erlang Solutions Team at June 06, 2024 10:55

ProcessOne

Understanding messaging protocols: XMPP and Matrix

In the world of real-time communication, two prominent protocols often come into discussion: XMPP and Matrix. Both protocols aim to provide robust and secure messaging solutions, but they differ in architecture, features, and community adoption. This article delves into the key differences and similarities between XMPP and Matrix to help you understand which might be better suited for your needs.

What is XMPP?

Overview

XMPP (Extensible Messaging and Presence Protocol) is an open-standard communication protocol originally developed for instant messaging (IM). It was designed as the Jabber protocol in 1999 to aggregate communication across a number of options, such as ICQ, Yahoo Messenger, and MSN. It was standardized by the IETF as RFC 3920 and RFC 3921 in 2004, and later revised as RFC 6120 and RFC 6121 in 2011.

Key Features

  • Decentralized Architecture: XMPP operates on a decentralized network of servers. The protocol is said to be federated. The network of all interconnected XMPP servers is called the XMPP federation.
  • Extensibility: The protocol is highly extensible through XMPP Extension Protocols (XEPs). There are currently more than 400 extensions covering a broad range of use cases like social networking and Internet of Things features through PubSub extensions, Groupchat (aka MUC, Multi-user chat), and VoIP with the Jingle protocol.
  • Security: Supports TLS for encryption and SASL for authentication. End-to-end encryption is available through the OMEMO extension.
  • Interoperability: Widely adopted with numerous clients and servers available.
  • Gateways: Built-in support for gateways to other protocols, allowing for communication across different messaging systems.

Network Protocol Design

  • TCP-Level Stream Protocol: XMPP is based on a TCP-level stream protocol using XML and namespaces, which is key to its extensibility while maintaining schema consistency. It can also run on top of other protocols like WebSocket or HTTP through the concept of binding.

Use Cases

  • Instant messaging
  • Presence information
  • Multi-user chat (MUC)
  • Social networks
  • Voice and video calls (with extensions)
  • Internet of Things
  • Massive messaging (massive scale messaging platforms like WhatsApp)

What is Matrix?

Overview

Matrix is an open standard protocol for real-time communication, designed to provide interoperability between different messaging systems. It was introduced in 2014 by the Matrix.org Foundation.

Key Features

  • Decentralized Architecture: Like XMPP, Matrix is also decentralized and supports a federated model.
  • Event-Based Model: Uses an event-based architecture where all communications are stored in a distributed database. The conversations are replicated on all servers in the federation that participate in the discussion.
  • End-to-End Encryption: Built-in end-to-end encryption using the Olm and Megolm libraries.
  • Bridging: Strong focus on bridging to other communication systems like Slack, IRC, and XMPP.

Network Protocol Design

  • HTTP-Based Protocol: Matrix uses HTTP for communication and JSON for its data structure, making it suitable for web environments and easy to integrate with web technologies.
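
To make the contrast with XMPP’s XML streams (described above) concrete, here is a rough sketch, written in Elixir for convenience, of how the same chat message might look as an XMPP stanza and as a Matrix room event. The addresses and room ID are made-up examples.

```elixir
# The same "Hello, Bob!" message in both wire formats (illustrative only).

xmpp_stanza = """
<message from='alice@example.com/phone'
         to='bob@example.org'
         type='chat'>
  <body>Hello, Bob!</body>
</message>
"""

matrix_event = %{
  "type" => "m.room.message",
  "sender" => "@alice:example.com",
  "room_id" => "!roomid:example.org",
  "content" => %{"msgtype" => "m.text", "body" => "Hello, Bob!"}
}

IO.puts(xmpp_stanza)
IO.inspect(matrix_event)
```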

Use Cases

  • Instant messaging
  • VoIP and video conferencing
  • Bridging different chat systems

Detailed Comparison

Architecture

  • XMPP: Uses a federated model to build a network of communication that works for both messaging and social networking. The content is not duplicated by default.
  • Matrix: Uses a federated model where each server stores a complete history of conversations, allowing for decentralized control and redundancy.

XMPP is built around an event-based architecture to reach the largest possible scale. Matrix is built around a distributed model that may be more appealing to smaller community servers. As conversations are replicated, Matrix can cope more easily with servers suffering from frequent disconnections in the federated network.

Extensibility

  • XMPP: Extensible through XEPs that are standardized by the XMPP Standards Foundation, allowing for a wide variety of additional features. As the protocol is based on XML, it can also be extended for custom client features, using your own namespace. The XML schema can be used to define your extension data structure.
  • Matrix: Extensible through modules and APIs, with a strong focus on bridging to other protocols. It is extensible as well and allows custom events and custom properties.

Security

  • XMPP: Supports TLS for secure communication and SASL for authentication. End-to-end encryption is available through extensions like OMEMO.
  • Matrix: Supports TLS for secure communication. Built-in end-to-end encryption using Olm and Megolm, providing robust security out of the box.

Both end-to-end encryption approaches are similar, as they are both based on the same double ratchet encryption algorithm made popular by the Signal messaging platform.

Interoperability

  • XMPP: Known for its interoperability due to its long-standing presence and wide adoption. Includes built-in support for gateways to other protocols.
  • Matrix: Designed with interoperability in mind, with native support for bridging to other protocols. Its gateways are more recent; they could be ported to work on both protocols (which would be neat).

Scalability

  • XMPP: By design, XMPP has an edge in terms of scalability. XMPP is event-based and works as a broadcast hub for messages, making it efficient in handling a large number of concurrent users. It is proven to sustain millions of concurrent users.
  • Matrix: Matrix maps conversations to documents that are replicated across servers involved in the discussion. This means the document state needs to be merged and reconciled for each new posted message, which incurs significant overhead in terms of processing power, memory, and storage. Its use case is mainly “organization level” chat, supporting thousands of users, not millions.

Community and Adoption

  • XMPP: Established and widely adopted, with a large number of client and server implementations. This can be seen as a drawback, leading to an intimidating choice of tools. However, it has proven to be a strength: the many competing implementations have shown themselves to be interoperable, which validates the robustness of the protocol. The protocol was initially created by Jeremie Miller, who co-founded Jabber, Inc. to support the first server; the company was later acquired by Cisco. XMPP is now an Internet Engineering Task Force standard used for massive-scale deployments, driven by the non-profit XMPP Standards Foundation.
  • Matrix: Rapidly growing community with increasing adoption, particularly in open-source projects and decentralised applications. The main implementation is developed by Element, the company founded to grow the Matrix protocol.

Conclusion

Both XMPP and Matrix offer robust solutions for real-time communication with their own strengths. XMPP’s long history, extensibility, and efficient scalability make it a reliable choice for traditional instant messaging and presence-based applications, but also social networks, Internet of Things, and workflows that mix human users and devices. On the other hand, Matrix’s architecture, built-in end-to-end encryption, and focus on gateway development make it an excellent choice for those looking to integrate multiple communication systems or require secure corporate messaging through the Element client.

Using a server like ejabberd is a future-proof approach, as it is multiprotocol by design. ejabberd supports XMPP, MQTT, SIP, can act as a VoIP and video call proxy (STUN/TURN), and can federate with the Matrix network. It is likely to support the Matrix client protocol as well in beta in the near future.

Choosing between XMPP and Matrix depends largely on your specific needs, existing infrastructure, and future scalability requirements. Both protocols continue to evolve, offering exciting possibilities for real-time communication.


Mistakes? If you spot a mistake, please reach out to share it! Thanks! I would like this document to be as accurate as possible.

The post Understanding messaging protocols: XMPP and Matrix first appeared on ProcessOne.

by Mickaël Rémond at June 06, 2024 08:04

The XMPP Standards Foundation

The XMPP Newsletter May 2024

XMPP Newsletter Banner

Welcome to the XMPP Newsletter, great to have you here again! This issue covers the month of May 2024.

XMPP and Google Summer of Code 2024

The XSF has been accepted as a hosting organisation at GSoC in 2024 again! These XMPP projects have received a slot and will kick off with coding now:

XSF and Google Summer of Code 2024

XSF Fiscal Hosting Projects

The XSF offers fiscal hosting for XMPP projects. Please apply via Open Collective. For more information, see the announcement blog post. Current projects you can support:

XMPP Events

XMPP Videos

Debian and XMPP in Wind and Solar Measurement talk at MiniDebConf Berlin 2024.

XMPP Articles

XMPP Software News

XMPP Clients and Applications

XMPP Servers

XMPP Web as Openfire plugin

XMPP Libraries & Tools

Slixfeed News Bot

Extensions and specifications

The XMPP Standards Foundation develops extensions to XMPP in its XEP series in addition to XMPP RFCs.

Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).

Proposed

The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

  • No XEP was proposed this month.

New

  • No new XEPs this month.

Deferred

If an experimental XEP is not updated for more than twelve months, it will be moved off Experimental to Deferred. If there is another update, it will put the XEP back onto Experimental.

  • No XEPs deferred this month.

Updated

  • Version 2.5.0 of XEP-0030 (Service Discovery)
    • Add note about some entities not advertising the feature. (pep)
  • Version 1.34.6 of XEP-0045 (Multi-User Chat)
    • Remove contradicting keyword on sending subject in §7.2.2. (pep)

Last Call

Last calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call can help improve the XEP before returning it to the Council for advancement to Stable.

  • XEP-0421: Anonymous unique occupant identifiers for MUCs
  • XEP-0440: SASL Channel-Binding Type Capability

Stable

  • Version 1.0.0 of XEP-0398 (User Avatar to vCard-Based Avatars Conversion)
    • Accept as Stable as per Council Vote from 2024-04-30. (XEP Editor (dg))

Deprecated

  • No XEP deprecated this month.

Rejected

  • No XEP rejected this month.

Spread the news

Please share the news on other networks:

Subscribe to the monthly XMPP newsletter

Also check out our RSS Feed!

Looking for job offers or want to hire a professional consultant for your XMPP project? Visit our XMPP job board.

Newsletter Contributors & Translations

This is a community effort, and we would like to thank translators for their contributions. Volunteers and more languages are welcome! Translations of the XMPP Newsletter will be released here (with some delay):

  • English (original): xmpp.org
    • General contributors: Adrien Bourmault (neox), Alexander “PapaTutuWawa”, Arne, cal0pteryx, emus, Federico, Gonzalo Raúl Nemmi, Jonas Stein, Kris “poVoq”, Licaon_Kter, Ludovic Bocquet, Mario Sabatino, melvo, MSavoritias (fae,ve), nicola, Simone Canaletti, singpolyma, XSF iTeam
  • French: jabberfr.org and linuxfr.org
    • Translators: Adrien Bourmault (neox), alkino, anubis, Arkem, Benoît Sibaud, mathieui, nyco, Pierre Jarillon, Ppjet6, Ysabeau
  • Italian: notes.nicfab.eu
    • Translators: nicola
  • Spanish: xmpp.org
    • Translators: Gonzalo Raúl Nemmi

Help us to build the newsletter

This XMPP Newsletter is produced collaboratively by the XMPP community. Each month’s newsletter issue is drafted in this simple pad. At the end of each month, the pad’s content is merged into the XSF Github repository. We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. You have a project and want to spread the news? Please consider sharing your news or events here, and promote it to a large audience.

Tasks we do on a regular basis:

  • gathering news in the XMPP universe
  • short summaries of news and events
  • summary of the monthly communication on extensions (XEPs)
  • review of the newsletter draft
  • preparation of media images
  • translations
  • communication via media accounts

License

This newsletter is published under CC BY-SA license.

June 06, 2024 00:00

June 02, 2024

Remko Tronçon

Packaging Swift apps for Alpine Linux

While trying to build my Age Apple Secure Enclave plugin, a small Swift CLI app, on Alpine Linux, I found out that Swift itself doesn’t build against musl, nor is it able to create musl binaries. This means that Swift programs don’t run on Alpine. The assumption that Linux implies glibc apparently runs deep into the Swift internals, so although some work is being done in this area, I suspect musl support isn’t going to land soon. So, I started to look for alternatives for getting my Swift app on Alpine.

by Remko Tronçon at June 02, 2024 00:00

May 30, 2024

Erlang Solutions

7 Key Blockchain Principles for Business

Welcome to the final instalment of our Blockchain series. Here, we are taking a look at the seven fundamental principles that make blockchain what it is: immutability, decentralisation, ‘workable’ consensus, distribution and resilience, transactional automation (including ‘smart contracts’), transparency and trust, and links to the external world.

For business leaders, understanding these core principles is crucial in harnessing the potential for building trust, spearheading innovation and driving overall business efficiency. 

If you missed the previous blog, feel free to learn all about the strengths of Erlang and Elixir in blockchain here.

Now let’s discuss how these seven principles can be leveraged to transform business operations.

Understanding the Core Concepts

In a survey conducted by EY, over a third (38%) of US workers surveyed said that blockchain technology is widely used within their businesses. A further 44% said the tech would be widely used within three years and 18% reported that they were still a few years away from being widely used within their business.

To increase the adoption of blockchain, it is key to understand its principles, how it operates, and the advantages it offers across various industries, such as financial services, retail, advertising and marketing, and digital health.

Immutability

In an ideal world, we would want to keep an accurate record of events and make sure it doesn’t degrade over time due to natural events, human error, or fraud. While physical items can change over time, digital information can be continuously corrected to prevent deterioration.

Implementing an immutable blockchain aims to maintain a digital history that remains unaltered over time. This is especially useful for businesses when it comes to assessing the ownership or the authenticity of an asset or to validate one or more transactions. In the context of legalities and business regulation, having an immutable record of transactions is key as this can save time and resources by streamlining these processes.

In a well-designed blockchain, data is encoded using hashing algorithms. This ensures that only those with sufficient information can verify a transaction. This is typically implemented on top of Merkle trees, where hashes of combined hashes are calculated.

Merkle tree or hash tree

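To illustrate the idea of ‘hashes of combined hashes’, here is a small, hedged Elixir sketch that computes a Merkle root over a list of transactions with SHA-256. The transactions are made-up strings, and unpaired leaves are simply promoted to the next level, which real implementations may handle differently.

```elixir
defmodule MerkleSketch do
  # Hash a single piece of data with SHA-256.
  defp hash(data), do: :crypto.hash(:sha256, data)

  # The Merkle root: hash each transaction, then repeatedly hash pairs of
  # hashes together until a single root hash remains.
  def root(transactions) do
    transactions
    |> Enum.map(&hash/1)
    |> reduce_level()
  end

  defp reduce_level([single_hash]), do: single_hash

  defp reduce_level(hashes) do
    hashes
    |> Enum.chunk_every(2)
    |> Enum.map(fn
      [left, right] -> hash(left <> right)
      [odd] -> odd                      # promote an unpaired hash to the next level
    end)
    |> reduce_level()
  end
end

MerkleSketch.root(["tx-1", "tx-2", "tx-3", "tx-4"])
|> Base.encode16(case: :lower)
```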

Challenges raised by business leaders

Legitimate questions can be raised by business leaders about storing an immutable data structure:

  • Scalability: How is the increasing volume of data handled once it surpasses ledger capacities?
  • Impact of decentralisation: What effect does growing data history and validation complexity have on decentralisation and participant engagement?
  • Performance verification: How does verification degrade as data history expands, particularly during peak usage?
  • Risk mitigation: How can we ensure consensus and prevent fragmented networks or unauthorised forks in transaction history?

Businesses face challenges in managing growing data, maintaining decentralisation, verifying transactions, and preventing risks in immutable data storage. Meeting regulations also adds complexity, and decisions about what data to store must take sensitivity into account.

Addressing regulatory challenges

Compliance with GDPR introduces challenges, especially concerning the “right to be forgotten”, and fines for non-compliance are potentially severe. The solutions introduced so far effectively aim at anonymising the information that enters the immutable on-chain storage process, while sensitive information is stored separately in supporting databases, where it can be deleted if required.

The challenge lies in determining upfront what information is considered sensitive and suitable for inclusion in the immutable record. A wrong choice has the potential to backfire at a later stage if any involved actor manages to extract or trace sensitive information through the immutable history.

Immutability in blockchain technology provides a solution to preserving accurate historical records, ensuring the authenticity and ownership of assets, streamlining transaction validation, and saving businesses time and resources. But it also has its challenges, such as managing data volumes, maintaining decentralisation, and ensuring it is complying with regulations, for example, GDPR. Despite these challenges, businesses can leverage immutable blockchain technology to modernise record-keeping practices and uphold the integrity of their operations.

Decentralisation of control

Remember the 2008 financial crash? One of the reactions following this crisis was against over-centralisation. 

In response to the movement towards decentralisation, businesses have acknowledged the potential for innovation and adaptation. Embracing decentralisation not only aligns with consumer values of independence and democratic fairness, but it also presents opportunities for businesses to explore new markets and develop innovative products and services, as well as implement decentralised governance models within their own organisations.

Use cases for decentralisation

There are many ways in which businesses can leverage blockchain technology in order to embrace decentralisation and unlock new growth opportunities:

Decentralised finance (DeFi): DeFi platforms leverage blockchain technology to provide financial services without the need for intermediaries, such as banks or brokerages.

Supply chain management: By recording every transaction on a blockchain ledger, businesses can track the movement of goods from the point of origin to the end consumer. 

Smart contracts: Automatically enforce and execute contractual agreements when predefined conditions are met, also without the need for intermediaries. 

Tokenisation of assets: Businesses can turn their assets into digital tokens. This helps split ownership into smaller parts, making it easier to buy and sell, and allowing direct trading between people without intermediaries.

Identity management: Blockchain-based identity management systems offer secure and decentralised solutions. Businesses can use blockchain to verify the identity of customers, employees, and partners while giving people greater control over their data. 

Data management and monetisation: Blockchain allows for businesses to securely manage and monetise data by giving individuals control over their data, facilitating direct transactions between data owners and consumers. 

Further considerations of decentralisation

With full decentralisation, there is no central authority to resolve potential transactional issues. Traditional, centralised systems have well-developed anti-fraud and asset recovery mechanisms which people have become used to. 

Using new, decentralised technology places a far greater responsibility on the user if they are to receive all of the benefits of the technology, forcing them to take additional precautions when it comes to handling and storing their digital assets.

There is no point in having an ultra-secure blockchain if one then hands over one’s wallet private key to an intermediary whose security is lax: it’s like having the most secure safe in the world and then writing the combination on a whiteboard in the same room.

Decentralisation, security, and usability

For businesses, embracing decentralisation unlocks new opportunities while posing challenges in security and usability. Balancing these factors is key as businesses continue to navigate decentralised technologies, shaping the future of commerce and industry. 

Businesses must consider whether the increased level of personal responsibility associated with secure blockchain implementation is a price users are willing to pay, or if they will trade off some security for ease of use and potentially more centralisation.

Workable Consensus

As businesses increasingly push towards decentralised forms of control and responsibility, the fundamental requirement to validate transactions without a central authority, known as the ‘consensus’ problem, has come to light. The blockchain industry has seen various approaches emerge to address this, with some competing and others complementing each other.

There’s been a lot of attention on governance in blockchain ecosystems. This involves regulating how quickly new blocks are added to the chain and the rewards for miners (especially in proof-of-work blockchains). Overall, it’s crucial to set up incentives and deterrents so that everyone involved helps the chain grow healthily.

Besides serving as an economic deterrent against denial of service and spam attacks, Proof of Work (POW) approaches are amongst the first attempts to automatically work out, via the use of computational power, which ledgers/actors have the authority to create/mine new blocks. Similar approaches (proof of space, proof of bandwidth etc) have followed, but all of them are vulnerable to deviations from the intended fair distribution of control.

Proof of work algorithm
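
As a toy illustration of the proof-of-work idea described above, here is a hedged Elixir sketch that searches for a nonce giving a block hash with a chosen number of leading zero bytes. The block contents and difficulty are arbitrary examples, far simpler than any production scheme.

```elixir
defmodule ToyProofOfWork do
  # Find a nonce such that SHA-256(block_data <> nonce) starts with
  # `difficulty` zero bytes. Higher difficulty means more hashing work.
  def mine(block_data, difficulty, nonce \\ 0) do
    digest = :crypto.hash(:sha256, block_data <> Integer.to_string(nonce))

    if binary_part(digest, 0, difficulty) == :binary.copy(<<0>>, difficulty) do
      {nonce, Base.encode16(digest, case: :lower)}
    else
      mine(block_data, difficulty, nonce + 1)
    end
  end
end

# A difficulty of 2 zero bytes is found quickly on a laptop; each extra byte
# multiplies the expected work by 256.
ToyProofOfWork.mine("example block contents", 2)
```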

How do these methods play out in practice? Participants can gain an edge by purchasing powerful hardware in bulk and running it in areas with cheaper electricity. This can help them outpace competitors in mining new blocks and gaining control, ultimately centralising authority.

In response to the challenges brought on by centralised control and environmental concerns associated with traditional mining methods, alternative approaches such as Proof of Stake (POS) and Proof of Importance (POI) have emerged. These methods remove the focus from computing resources and tie authority to accumulated digital asset wealth or participant productivity. However, implementing POS and POI while mitigating the risk of power and wealth concentration could present significant challenges for developers and business leaders alike.

Distribution and resilience

Apart from decentralising authority, control and governance, blockchain solutions typically embrace a distributed peer-to-peer (P2P) design paradigm. 

This preference is motivated by the inherent resilience and flexibility that these types of networks have introduced and demonstrated, particularly in the context of file and data sharing. A centralised network, typical of mainframes and centralised services is exposed to a ‘single point of failure’ vulnerability as the operations are always routed towards a central node.

If the central node breaks down or is congested, all the other nodes will be affected by disruptions. In a business context, decentralised and distributed networks attempt to reduce the detrimental effects that issues occurring on a node might trigger on other nodes. In a decentralised network, the failure of a node can still affect several neighbouring nodes that rely on it to carry out their operations. In a distributed network the idea is that the failure of a single node should not impact significantly any other node. Even when one preferential/optimal route in the network becomes congested or breaks down entirely, a message can still reach the destination via an alternative route. 

This greatly increases the chance of keeping a service available in the event of failure or malicious attacks such as a denial of service (DOS) attack. Blockchain networks with a distributed ledger redundancy are known for their resilience against hacking, especially when it comes to very large networks, such as Bitcoin. In such a highly distributed network, the resources needed to generate a significant disruption are very high, which not only delivers on the resilience requirement but also works as a deterrent against malicious attacks (mainly because the cost of conducting a successful malicious attack becomes prohibitive).

Although a distributed topology can provide an effective response to failures or traffic spikes, businesses need to be aware that delivering resilience against prolonged over-capacity demands or malicious attacks requires adequate adapting mechanisms. While the Bitcoin network is well positioned, as it currently benefits from a high capacity condition (due to the historically high incentive to purchase hardware by third-party miners), this is not the case for other emerging networks as they grow in popularity. This is where novel instruments, capable of delivering preemptive adaptation combined with back pressure throttling applied to the P2P level, can be of great value.

Distributed systems are not new and, whilst they provide highly robust solutions to many enterprise and governmental problems, they are subject to the laws of physics and require their architects to consider the trade-offs that need to be made in their design and implementation (e.g. consistency vs availability).

Automation

A high degree of automation is required for businesses to sustain a coherent, fair and consistent blockchain and surrounding ecosystem. Existing areas with a high demand for automation include those common to most distributed systems: for example, deployment, elastic topologies, monitoring, recovery from anomalies, testing, continuous integration, and continuous delivery.

For blockchains, these represent well-established IT engineering practices. Additionally, there is a creative R&D effort to automate the interactions required to handle assets, computational resources and users across a range of new problem spaces (e.g. logistics, digital asset creation and trading).

The trend of social interactions has seen a significant shift towards scripting for transactional operations. This is where smart contracts and constrained virtual machine (VM) interpreters have emerged – an effort pioneered by the Ethereum project.

Many blockchain enthusiasts are drawn to the ability to set up asset exchanges, specifying conditions and actions triggered by certain events. Smart contracts find various applications in lotteries, digital asset trading, and derivative trading. However, despite the exciting potential of smart contracts, getting involved in this area requires a significant level of expertise. Only skilled developers who are willing to invest time in learning Domain Specific Languages (DSL) can create and modify these contracts.
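
To make the idea of conditions, actions and triggers concrete, here is a purely illustrative toy in Python (not Solidity or any real smart-contract DSL; the EscrowContract class and the event format are hypothetical): a contract that releases an asset only once a specified condition is met by an incoming event.

from dataclasses import dataclass
from typing import Callable

@dataclass
class EscrowContract:
    seller: str
    buyer: str
    asset: str
    condition: Callable[[dict], bool]   # predicate evaluated against external events
    released: bool = False

    def on_event(self, event: dict) -> str:
        # Trigger: every incoming event is checked against the contract's condition.
        if not self.released and self.condition(event):
            self.released = True
            return f"{self.asset} transferred to {self.buyer}"
        return "no action"

# Release the asset once a payment of at least 100 units is observed.
contract = EscrowContract(
    seller="alice", buyer="bob", asset="digital artwork #42",
    condition=lambda e: e.get("type") == "payment" and e.get("amount", 0) >= 100,
)
print(contract.on_event({"type": "payment", "amount": 50}))    # no action
print(contract.on_event({"type": "payment", "amount": 120}))   # asset transferred to bob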

The challenge is to respond to safety and security concerns when smart contracts are applied to edge case scenarios that deviate from the ‘happy path’. If badly designed contracts cannot properly roll back or undo a miscarried transaction, their execution might lead to assets being lost or erroneously handed over to unwanted receivers.

Automation and governance

Another area in high need of automation is governance. Any blockchain ecosystem of users and computing resources requires periodic configuration of its parameters to carry on operating coherently and consensually. This results in a complex exercise of tuning incentives and deterrents to guarantee the fulfilment of ambitious collaborative and decentralised goals. The newly emerging field of ‘blockchain economics’ (combining economics, game theory, social science and other disciplines) remains in its infancy.

The removal of a central ruling authority produces a vacuum that needs to be filled by an adequate decision-making body, which is typically supplied with automation that maintains a combination of static and dynamic configuration settings. Those consensus solutions referred to earlier, which use computational resources or stakeable assets to assign the authority not only to produce blocks but also to steer the variable part of governance, have succeeded in filling the decision-making gap in a fair and automated way. Subsequently, the exploitation of flaws in the static element of governance has hindered the success of these models. This has contributed to the rise in popularity of curated approaches such as POA or DPOS, which not only bring back centralised control but also reduce the automation of governance.

This is a major area of evolution in blockchain, where we expect to see widespread market adoption.

Transparency and trust

For businesses to produce the desired audience engagement for blockchain and eventual mass adoption and success, consensus and governance mechanisms need to operate transparently. Users need to know who has access to what data so that they can decide what can be stored and possibly shared on-chain. These are the contractual terms by which users agree to share their data. As previously discussed, users might be required to exercise the right for their data to be deleted, which typically is a feature delivered via auxiliary, ‘off-chain’ databases. In contrast, only hashed information, effectively devoid of its meaning, is preserved permanently on-chain.

Given the immutable nature of the chain history, it is important to decide upfront what data should be permanently written on-chain and what gets written off-chain. The users should be made aware of what data gets stored on-chain and with whom it could potentially be shared. Changing access to on-chain data or deleting it goes against the fundamentals of immutability and therefore is almost impossible. Getting that decision wrong at the outset can significantly affect the cost and usability (and therefore likely adoption) of the particular blockchain in question.

Besides transparency, trust is another critical feature that users and customers legitimately seek. Trust has to go beyond the scope of the people involved as systems need to be trusted as well. Every static element, such as an encryption algorithm, the dependency on a library, or a fixed configuration, is potentially exposed to vulnerabilities.

Link to the external world

The attractive features that blockchain has brought to the internet market would be limited to handling digital assets unless there was a way to link information to the real world. Embracing blockchain solely within digital boundaries may diminish its appeal, as businesses seek solutions that integrate seamlessly with the analogue realities of our lives.

Technologies used to overcome these limitations include cyber-physical devices such as sensors for input and robotic activators for output, and in most circumstances, people and organisations. As we read through most blockchain white papers, we occasionally come across the notion of the Oracle, which, in short, is a way to name an input coming from a trusted external source that could potentially trigger/activate a sequence of transactions in a Smart Contract or which can otherwise be used to validate some information that cannot be validated within the blockchain itself.

Blockchain oracles connecting blockchains to inputs and outputs

Bitcoin and Ethereum, still the two dominant projects in the blockchain space, are viewed by many investors as an opportunity to diversify a portfolio or speculate on the value of their respective cryptocurrencies. The same applies to a wide range of other cryptocurrencies, except fiat-pegged currencies, most notably Tether, where the value is effectively bound to the US dollar. Conversions from one cryptocurrency to another and to/from fiat currencies are normally operated by exchanges on behalf of an investor. These are again peripheral services that serve as a link to the external physical world. For businesses, these exchanges provide crucial services that facilitate investment and trading activities, contributing to the broader ecosystem of blockchain-based assets.

Besides oracles and cyber-physical links, interest is emerging in linking smart contracts together to deliver a comprehensive solution. Contracts could indeed operate in a cross-chain scenario to offer interoperability among a variety of digital assets and protocols. Although attempts to combine different protocols and approaches have emerged, this is still an area where further R&D is necessary to provide enough instruments and guarantees to developers and entrepreneurs. The challenge is to deliver cross-chain functionalities without the support of a central governing agency/body.

To conclude

As we’ve highlighted throughout the series, blockchain provides real transformative potential across varying business industries. For a business to truly leverage this technology, the fundamentals we have highlighted must be understood to navigate the complexities of blockchain adoption successfully. 

If you want to start a conversation with the team, feel free to drop us a line.

The post 7 Key Blockchain Principles for Business appeared first on Erlang Solutions.

by Erlang Solutions Team at May 30, 2024 09:46

Blockchain Tech Deep Dive 2/4 | Myths vs. Realities

This is the second part of our ‘Making Sense of Blockchain’ blog post series – you can read part 1 on ‘6 Blockchain Principles’ here. This article is based on the original post by Dominic Perini here.

Join our FinTech mailing list for more great content, industry news and event updates – sign up here >>

With so much hype surrounding blockchain, we separate the reality from the myths to ensure delivery of the ROI and competitive advantage that you need.
It’s not our aim here to discuss the data structure of blockchain itself, issues like those of transactions per second (TPS) or questions such as ‘what’s the best Merkle tree solution to adopt?’. Instead, we shall examine the state of maturity of blockchain technology and its alignment with the core principles that underpin a distributed ledger ecosystem.

Blockchain technology aims to embrace the following high-level principles:

7 founding principles of blockchain

  • Immutability 
  • Decentralisation 
  • ‘Workable’ consensus
  • Distribution and resilience
  • Transactional automation (including ‘smart contracts’)
  • Transparency and Trust
  • A link to the external world

Immutability of history

In an ideal world it would be desirable to preserve an accurate historical trace of events and make sure this trace does not deteriorate over time, whether through natural events, human error or the intervention of fraudulent actors. Artefacts produced in the analogue world face alteration over time, while in the digital world the quantised/binary nature of stored information provides the opportunity for continuous corrections to prevent the deterioration that might otherwise occur.

Writing an immutable blockchain aims to retain a digital history that cannot be altered over time. This is particularly useful when it comes to assessing the ownership or the authenticity of an asset or to validate one or more transactions.

We should note that, on top of the inherent immutability of a well-designed and implemented blockchain, hashing algorithms provide a means to encode the information that gets written in the history so that the capacity to verify a trace/transaction can only be performed by actors possessing sufficient data to compute the one-way cascaded encoding/encryption. This is typically implemented on top of Merkle trees where hashes of concatenated hashes are computed.
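
As a rough illustration of ‘hashes of concatenated hashes’, here is a minimal Merkle-root computation in Python; this is a simplified sketch and not the exact scheme used by Bitcoin or any other specific blockchain.

import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    # Hash every leaf (e.g. a transaction), then repeatedly hash concatenated
    # pairs of hashes until a single root hash remains.
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])   # duplicate the last hash if the level is odd
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

transactions = [b"alice->bob:5", b"bob->carol:2", b"carol->dave:1"]
print(merkle_root(transactions).hex())

Changing any transaction changes the root, which is what allows a verifier holding only the root hash to detect tampering anywhere in the underlying history.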

Legitimate questions can be raised about the guarantees for indefinitely storing an immutable data structure:

  • If this is an indefinitely growing history, where can it be stored once it grows beyond the capacity of the ledgers?
  • As the history size grows (and/or the computing power needed to validate further transactions increases) this reduces the number of potential participants in the ecosystem, leading to a de facto loss of decentralisation. At what point does this concentration of ‘power’ create concerns?
  • How does verification performance deteriorate as the history grows?
  • How does it deteriorate when a lot of data gets written on it concurrently by users?
  • How long is the segment of data that you replicate on each ledger node?
  • How much network traffic would such replication generate?
  • How much history is needed to be able to compute a new transaction?
  • What compromises need to be made on linearisation of the history, replication of the information, capacity to recover from anomalies and TPS throughput?


Further to the above questions, how many replicas converging to a specific history (i.e. consensus) are needed for it to carry on existing? And in particular:

  • Can a fragmented network carry on writing to their known history?
  • Is an approach designed to ‘heal’ any discrepancies in the immutable history of transactions by rewarding the longest fork, fair and efficient?
  • Are the deterrents strong enough to prevent a group of ledgers forming their own fork that eventually reaches wider adoption?


Furthermore, a new requirement to comply with the General Data Protection Regulation (GDPR) in Europe and ‘the right to be forgotten’ introduces new challenges to the perspective of keeping permanent and immutable traces indefinitely. This is important because fines for breaches of GDPR are potentially very severe. The solutions introduced so far effectively aim at anonymising the information that enters the immutable on-chain storage process, while sensitive information is stored separately in support databases where it can be deleted if required. None of these approaches has yet been tested by the courts.

The challenging aspect here is to decide upfront what is considered sensitive and what can safely be placed on the immutable history. A wrong choice can backfire at a later stage in the event that any involved actor manages to extract or trace sensitive information through the immutable history.

Immutability represents one of the fundamental principles that motivate the research into blockchain technology, both private and public. The solutions explored so far have managed to provide a satisfactory response to the market needs via the introduction of history linearisation techniques, one-way hashing encryptions, Merkle trees and off-chain storage, although the linearity of the immutable history comes at a cost (notably transaction volume).

Decentralisation of control

One of the reactions following the 2008 global financial crisis was against over-centralisation. This led to the exploration of various decentralised mechanisms. The proposition that individuals would like to enjoy the freedom to be independent of a central authority gained in popularity. Self-determination, democratic fairness and heterogeneity as a form of wealth are among the dominant values broadly recognised in Western (and, increasingly, non-Western) society. These values added weight to the view that introducing decentralisation into a system is positive.

With full decentralisation, there is no central authority to resolve potential transactional issues for us. Traditional, centralised systems have well developed anti-fraud and asset recovery mechanisms which people have become used to. Using new, decentralised technology places a far greater responsibility on the user if they are to receive all of the benefits of the technology, forcing them to take additional precautions when it comes to handling and storing their digital assets.

There’s no point having an ultra-secure blockchain if one then hands over one’s wallet private key to an intermediary whose security is lax: it’s like having the most secure safe in the world then writing the combination on a whiteboard in the same room.

Is the increased level of personal responsibility that goes with the proper implementation of a secure blockchain a price that users are willing to pay? Or, will they trade off some security in exchange for ease of use (and, by definition, more centralisation)? 

Consensus

The consistent push towards decentralised forms of control and responsibility has brought to light the fundamental requirement to validate transactions without a central authority, known as the ‘consensus’ problem. Several approaches have grown out of the blockchain industry, some competing and some complementary.

There has also been a significant focus on the concept of governance within a blockchain ecosystem. This concerns the need to regulate the rates at which new blocks are added to the chain and the associated rewards for miners (in the case of blockchains using proof of work (POW) consensus methodologies). More generally, it is important to create incentives and deterrent mechanisms whereby interested actors contribute positively to the healthy continuation of chain growth.

Besides serving as an economic deterrent against denial of service and spam attacks, POW approaches are amongst the first attempts to automatically work out, via the use of computational power, which ledgers/actors have the authority to create/mine new blocks. Other similar approaches (proof of space, proof of bandwidth etc.) followed; however, they all suffer from exposure to deviations from the intended fair distribution of control. Wealthy participants can, in fact, exploit these approaches to gain an advantage by purchasing high-performance (CPU/memory/network bandwidth) dedicated hardware in large quantities and operating it in jurisdictions where electricity is relatively cheap. This results in overtaking the competition to obtain the reward, and the authority to mine new blocks, which has the inherent effect of centralising control. Also, the huge energy consumption that comes with the inefficient nature of the competitive race to mine new blocks in POW consensus mechanisms has raised concerns about its environmental impact and economic sustainability.
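
To picture the competitive race at the heart of POW, here is a toy proof-of-work loop in Python; it is vastly simplified compared with Bitcoin’s real block format and difficulty adjustment, but the principle is the same: keep trying nonces until a hash below the target is found.

import hashlib
from itertools import count

def mine(block_data: str, difficulty: int = 5) -> tuple[int, str]:
    # Search for a nonce whose hash starts with `difficulty` zero characters.
    # Raising the difficulty exponentially increases the expected number of
    # attempts, which is what consumes computing power and electricity.
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest

nonce, digest = mine("block #1: alice->bob:5")
print(nonce, digest)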

Proof of Stake (POS) and Proof of Importance (POI) are among the ideas introduced to drive consensus via the use of more social parameters, rather than computing resources. These two approaches link the authority to the accumulated digital asset/currency wealth or the measured productivity of the involved participants. Implementing POS and POI mechanisms, whilst guarding against the concentration of power/wealth, poses not insubstantial challenges for their architects and developers.

More recently, semi-automatic approaches, driven by a human-curated group of ledgers, are putting in place solutions to overcome the limitations and arguable fairness of the above strategies. The Delegated Proof of Stake (DPOS) and Proof of Authority (POA) methods promise higher throughput and lower energy consumption, while the human element can ensure a more adaptive and flexible response to potential deviations caused by malicious actors attempting to exploit a vulnerability in the system.

Distribution and resilience

Apart from decentralising authority, control and governance, blockchain solutions typically embrace a distributed peer-to-peer (P2P) design paradigm. This preference is motivated by the inherent resilience and flexibility that these types of networks have introduced and demonstrated, particularly in the context of file and data sharing.

A centralised network, typical of mainframes and centralised services, is clearly exposed to a ‘single point of failure’ vulnerability, as operations are always routed towards a central node. In the event that the central node breaks down or is congested, all the other nodes will be affected by disruptions.

Decentralised and distributed networks attempt to reduce the detrimental effects that issues occurring on a node might trigger on other nodes. In a decentralised network, the failure of a node can still affect several neighbouring nodes that rely on it to carry out their operations. In a distributed network, the idea is that the failure of a single node should not significantly impact any other node. In fact, even when one preferential/optimal route in the network becomes congested or breaks down entirely, a message can reach the destination via an alternative route. This greatly increases the chance of keeping a service available in the event of failure or malicious attacks such as a denial of service (DOS) attack.

Blockchain networks where a distributed topology is combined with a high redundancy of ledgers backing a history have occasionally been declared ‘unhackable’ by enthusiasts or, as some more prudent debaters say, ‘difficult to hack’. There is truth in this, especially when it comes to very large networks such as that of Bitcoin. In such a highly distributed network, the resources needed to generate a significant disruption are very high, which not only delivers on the resilience requirement but also works as a deterrent against malicious attacks (principally because the cost of conducting a successful malicious attack becomes prohibitive).

Although a distributed topology can provide an effective response to failures or traffic spikes, you need to be aware that delivering resilience against prolonged over-capacity demands or malicious attacks requires adequate adapting mechanisms. While the Bitcoin network is well positioned, as it currently benefits from a high capacity condition (due to the historically high incentive to purchase hardware by third-party miners), this is not the case for other emerging networks as they grow in popularity. This is where novel instruments, capable of delivering preemptive adaptation combined with back pressure throttling applied to the P2P level, can be of great value.

Distributed systems are not new and, whilst they provide highly robust solutions to many enterprise and governmental problems, they are subject to the laws of physics and require their architects to consider the trade-offs that need to be made in their design and implementation (e.g. consistency vs availability).

Automation

In order to sustain a coherent, fair and consistent blockchain and surrounding ecosystem, a high degree of automation is required. Existing areas with a high demand for automation include those common to most distributed systems: for instance, deployment, elastic topologies, monitoring, recovery from anomalies, testing, continuous integration, and continuous delivery. In the context of blockchains, these represent well-established IT engineering practices. Additionally, there is a creative R&D effort to automate the interactions required to handle assets, computational resources and users across a range of new problem spaces (e.g. logistics, digital asset creation and trading).

The trend of social interactions has seen a significant shift towards scripting for transactional operations. This is where smart contracts and constrained virtual machine (VM) interpreters have emerged – an effort pioneered by the Ethereum project.

The ability to define how an asset exchange operates, under which conditions, and which triggers set it in motion has attracted many blockchain enthusiasts. Some of the most common applications of smart contracts involve lotteries, the trade of digital assets and derivative trading. While there is clearly exciting potential unleashed by the introduction of smart contracts, it is also true that this is still an area with a high entry barrier. Only skilled developers who are willing to invest time in learning Domain Specific Languages (DSLs) can actually create and modify these contracts.

The challenge is to respond to safety and security concerns when smart contracts are applied to edge case scenarios that deviate from the ‘happy path’. If badly-designed contracts cannot properly roll back or undo a miscarried transaction, their execution might lead to assets being lost or erroneously handed over to unwanted receivers.

Another area in high need of automation is governance. Any blockchain ecosystem of users and computing resources requires periodic configuration of its parameters to carry on operating coherently and consensually. This results in a complex exercise of tuning incentives and deterrents to guarantee the fulfilment of ambitious collaborative and decentralised goals. The newly emerging field of ‘blockchain economics’ (combining economics, game theory, social science and other disciplines) remains in its infancy.

Clearly, the removal of a central ruling authority produces a vacuum that needs to be filled by an adequate decision-making body, which is typically supplied with automation that maintains a combination of static and dynamic configuration settings. Those consensus solutions referred to earlier, which use computational resources or stakeable assets to assign the authority not only to produce blocks but also to steer the variable part of governance, have succeeded in filling the decision-making gap in a fair and automated way. Subsequently, the exploitation of flaws in the static element of governance has hindered the success of these models. This has contributed to the rise in popularity of curated approaches such as POA or DPOS, which not only bring back centralised control but also reduce the automation of governance.

We expect this to be one of the major areas where blockchain has to evolve in order to succeed in getting widespread market adoption.

Transparency and trust

In order to produce the desired audience engagement for blockchain and eventual mass adoption and success, consensus and governance mechanisms need to operate transparently. Users need to know who has access to what data so that they can decide what can be stored and possibly shared on-chain. These are the contractual terms by which users agree to share their data. As previously discussed, users might be required to exercise the right for their data to be deleted, which typically is a feature delivered via auxiliary, ‘off-chain’ databases. In contrast, only hashed information, effectively devoid of its meaning, is preserved permanently on-chain.

Given the immutable nature of the chain history, it is important to decide upfront what data should be permanently written on-chain and what gets written off-chain. The users should be made aware of what data gets stored on-chain and with whom it could potentially be shared. Changing access to on-chain data or deleting it goes against the fundamentals of immutability and therefore is almost impossible. Getting that decision wrong at the outset can significantly affect the cost and usability (and therefore likely adoption) of the particular blockchain in question.

Besides transparency, trust is another critical feature that users legitimately seek. Trust has to go beyond the scope of the people involved as systems need to be trusted as well. Every static element, such as an encryption algorithm, the dependency on a library, or a fixed configuration, is potentially exposed to vulnerabilities.

Link to the external world

The attractive features that blockchain has brought to the internet market would be limited to handling digital assets unless there was a way to link information to the real world. It is safe to say that there would be less interest if we were to accept that a blockchain can only operate within the restrictive boundaries of the digital world, without connecting to the analogue real world in which we live.

Technologies used to overcome these limitations include cyber-physical devices such as sensors for input and robotic activators for output, and in most circumstances, people and organisations. As we read through most blockchain white papers, we occasionally come across the notion of the Oracle, which, in short, is a way to name an input coming from a trusted external source that could potentially trigger/activate a sequence of transactions in a Smart Contract or which can otherwise be used to validate some information that cannot be validated within the blockchain itself.

Bitcoin and Ethereum, still the two dominant projects in the blockchain space, are viewed by many investors as an opportunity to diversify a portfolio or speculate on the value of their respective cryptocurrencies. The same applies to a wide range of other cryptocurrencies, with the exception of fiat-pegged currencies, most notably Tether, where the value is effectively bound to the US dollar. Conversions from one cryptocurrency to another and to/from fiat currencies are normally operated by exchanges on behalf of an investor. These are again peripheral services that serve as a link to the external physical world.

Besides oracles and cyber-physical links, interest is emerging in linking smart contracts together to deliver a comprehensive solution. Contracts could indeed operate in a cross-chain scenario to offer interoperability among a variety of digital assets and protocols. Although attempts to combine different protocols and approaches have emerged, this is still an area where further R&D is necessary in order to provide enough instruments and guarantees to developers and entrepreneurs. The challenge is to deliver cross-chain functionalities without the support of a central governing agency/body.

* originally published 2018 by Dominic Perini

For any business size in any industry, we’re ready to investigate, build and deploy your blockchain-based project on time and to budget.

Let’s talk

If you want to start a conversation about engaging us for your fintech project or talk about partnering and collaboration opportunities, please send our Fintech Lead, Michael Jaiyeola, an email or connect with him via Linkedin.

The post Blockchain Tech Deep Dive 2/4 | Myths vs. Realities appeared first on Erlang Solutions.

by Erlang Solutions Team at May 30, 2024 09:06

May 28, 2024

The XMPP Standards Foundation

Scaling up with MongooseIM 6.2.1

MongooseIM is a scalable, extensible and efficient real-time messaging server that allows organisations to build cost-effective communication solutions. Built on the XMPP protocol, MongooseIM is specifically designed for businesses facing the challenge of large deployments, where real-time communication and user experience are critical. The main feature of the recently released MongooseIM 6.2.1 is the improved CETS in-memory storage backend, which simplifies and enhances its scalability.

It is difficult to predict how much traffic your XMPP server will need to handle. This is why MongooseIM offers several means of scalability. Firstly, even one machine can handle millions of connected users, provided that it is powerful enough. However, one machine is not recommended for fault tolerance reasons, because every time it needs to be shut down for maintenance, upgrade or because of any issues, your service would experience downtime. As a result, it is recommended to have a cluster of connected MongooseIM nodes, which communicate efficiently over the Erlang Distribution protocol. Having at least three nodes in the cluster allows you to perform a rolling upgrade, where each node is stopped, upgraded, and then restarted before moving to the next node, maintaining fault tolerance. During such an upgrade, you can increase hardware capabilities of each node, scaling the system vertically. Horizontal scaling is even easier, because you only need to add new nodes to the already deployed cluster.

Mnesia

Mnesia is a built-in Erlang Database that allows sharing both persistent and in-memory data between clustered nodes. It is a great option at the start of your journey, because it resides on the local disk of each cluster node and does not need to be started as a separate component. However, when you are heading towards a real application, a few issues with Mnesia become apparent:

  1. Consistency issues tend to show up quite frequently when scaling or upgrading the cluster. Resolving them requires Erlang knowledge, which should not be the case for a general-purpose XMPP server.
  2. A persistent volume for schema and data is required on each node. Such volumes can be difficult and costly to maintain, seriously impacting the possibilities for automatic scaling.
  3. Unlike relational databases, Mnesia is not designed for storing huge amounts of data, which can lead to performance issues.

First and foremost, it is highly recommended not to store any persistent data in Mnesia, and MongooseIM can be configured to store such data in a relational database instead. However, up to version 6.1.0, MongooseIM would still need Mnesia to store in-memory data shared between the cluster nodes. For example, a shared table of user sessions is necessary for message routing between users connected to different cluster nodes. The problem is that even without persistent tables, Mnesia still needs to keep its schema on disk, and the first two issues listed above would not be eliminated.

Introducing CETS

Introduced in version 6.2.0 and further refined in version 6.2.1, CETS (Cluster ETS) is a lightweight replication layer for in-memory ETS tables that requires no persistent data. Instead, it relies on a discovery mechanism to connect and synchronise with other cluster nodes. When starting up, each node registers itself in a relational database, which you should use anyway to keep all your persistent data.

Getting rid of Mnesia removes a lot of important obstacles. For example, if you are using Kubernetes, MongooseIM no longer requires any persistent volume claims (PVC’s), which could be costly, can get out of sync, and require additional management. Furthermore, with CETS you can easily set up horizontal autoscaling for your installation.

See it in action

If you want to quickly set up a working autoscaled MongooseIM cluster using Helm, see the detailed blog post. For more information, consult the documentation, GitHub or the product page. You can try MongooseIM online as well.

Read about Erlang Solutions as a sponsor of the XSF.

May 28, 2024 00:00

May 26, 2024

Ignite Realtime Blog

New Openfire plugin: XMPP Web!

We are excited to be able to announce the immediate availability of a new plugin for Openfire: XMPP Web!

This new plugin for the real-time communications server provided by the Ignite Realtime community allows you to install the third-party webclient named ‘XMPP Web’ in mere seconds! By installing this new plugin, the web client is immediately ready for use.

This new plugin complements others that similarly allow you to deploy a web client with great ease, like Candy, inVerse and JSXC! With the addition of XMPP Web, the selection of easy-to-install clients for your users becomes even larger!

The XMPP Web plugin for Openfire is based on release 0.10.2 of the upstream project, which currently is the latest release. It will automatically become available for installation in the admin console of your Openfire server in the next few days. Alternatively, you can download it immediately from its archive page.

Do you think this is a good addition to the suite of plugins? Do you have any questions or concerns? Do you just want to say hi? Please stop by our community forum or our live groupchat!

For other release announcements and news, follow us on Mastodon or X.

13 posts - 5 participants

Read full topic

by guus at May 26, 2024 17:50

May 23, 2024

Erlang Solutions

Balancing Innovation and Technical Debt

Let’s explore the delicate balance between innovation and technical debt. 

We will look into actionable strategies for managing debt effectively while optimising our infrastructure for resilience and agility.

Balancing acts and trade-offs

I was having this conversation with a close acquaintance not long ago. He’s setting up his new startup, filling a market gap he’s found, rushing to get there before the gap closes. It’s a common starting point for many entrepreneurs. You have an idea you need to implement, and until it is implemented and (hopefully) sold, there is no revenue, all while someone else could close the gap before you do. Time-to-market is key.

While there’s no revenue, you acquire debt. But even while being reasonably careful to keep it under control, you pay the Financial Debt off with a different kind of debt: Technical Debt. You choose to make a trade-off here, a trade-off that all too often is taken without awareness. This trade-off between debts requires careful thinking too: just as financial debt is an obvious risk, so is a technical one.

Let’s define these debts. Technical debt is the accumulated cost of shortcuts or deferred maintenance in software development and IT infrastructure. Financial debt is the borrowing of funds to finance business operations or investments. They share a common thread: the trade-off between short-term gains and long-term sustainability.

While financial debt can provide immediate capital for growth, it can also drag the business into financial inflexibility and burdensome interest rates. Technical debt expedites product development or reduces time-to-market, at the expense of increased maintenance, reduced scalability, and decreased agility. It is an often overlooked aspect of a technological investment, whose prompt care can have a huge impact on the lifespan of the business. Just as an enterprise must manage its financial leverage to maintain solvency and liquidity, it must also manage its technical debt to ensure the reliability, scalability, and maintainability of its systems and software.

The Economics of Technical Debt

Consider the example of a rapidly growing e-commerce platform: appeal attracts demand, demand requires resources, and resources mean increased vulnerability: the increasing user data and resources attract threats, aiming to disrupt services, steal sensitive data, or cause reputational harm. In this environment, the platform’s success is determined by its ability to strike a delicate balance between serving legitimate customers and thwarting malicious actors, where both play ever-increasing proportions.

Early on, the platform prioritised rapid development and deployment of new features; however, in their haste to innovate, the technical team accumulated debt by taking shortcuts and deferring critical maintenance tasks. What results from this is a platform that is increasingly fragile and inflexible, leaving it vulnerable to disruptive attacks and more agile competitors. Meanwhile, reasonably, the platform’s financial team kept allocating capital to funding marketing campaigns, product launches, and strategic acquisitions, under pressure to maximise profitability and shareholder value; however, they neglected to allocate sufficient resources towards cybersecurity initiatives, viewing them as discretionary expenses rather than critical investments in risk mitigation and resilience.

Technical currencies

If we’re talking about debt, and drawing a parallel with financial terms, let’s complete the parallel. By establishing the concept of currencies, we can build quantifiable metrics of value that reflect the health and resilience of digital assets. Code coverage, for instance, measures the proportion of the codebase exercised by automated tests, providing insights into the potential presence of untested or under-tested code paths. Along these lines, tests and documentation are the two assets that pay down the most technical debt.

See for example how coverage for MongooseIM has been continuously trending higher.

Similarly, Continuous Integration and Continuous Deployment (CI/CD) pipelines automate the process of integrating code changes, running automated tests, verifying engineering work, and deploying applications to diverse environments, enabling teams to deliver software updates frequently and with confidence. By streamlining the development workflow and reducing manual intervention, CI/CD pipelines enhance productivity, accelerate time-to-market, and minimise the risk of human error. Humans have bad days and sleepless nights; well-developed automation doesn’t.

Additionally, valuations on code quality that are diligently tracked on the organisation’s ticketing system provide valuable insights into the evolution of software assets and the effectiveness of ongoing efforts to address technical debt and improve code maintainability. These valuations enable organisations to prioritise repayment efforts, allocating resources effectively.

Repaying Technical Debt

The longer any debt remains unpaid, the greater its impact on the organisation — (technical) debt accrues “interest” over time. But, much like in finances, a debt is paid with available capital, and choosing a payment strategy can make a difference in whether capital is wasted or successfully (re)invested:

  1. Priorities and Plans: Identify and prioritise areas of technical debt based on their impact on the system’s performance, stability, and maintainability. Develop a plan that outlines the steps needed to address each aspect of technical debt systematically.
  2. Refactoring: Allocate time and resources to refactor code and systems to improve their structure, readability, and maintainability. Break down large, complex components into smaller, more manageable units, and eliminate duplicate or unnecessary code. See for example how we battled technical debt in MongooseIM.
  3. Automated Testing: Invest in automated testing frameworks and practices to increase test coverage and identify regression issues early in the development process. Implement continuous integration and continuous deployment (CI/CD) pipelines to automate the testing and deployment of code changes. Establishing this pipeline is always the first step into any new project we join and we’ve become familiar with diverse CI technologies like GitHub Actions, CircleCI, GitlabCI, or Jenkins.
  4. Documentation: Enhance documentation efforts to improve understanding and reduce ambiguity in the codebase. Document design decisions, architectural patterns, and coding conventions to facilitate collaboration and knowledge sharing among team members. Choose technologies that facilitate and enhance documentation work.

Repayment assets

Repayment assets are resources or strategies that can be leveraged to make debt repayment financially viable. Here are some key repayment assets to consider:

  1. Training and Education: Provide training and education opportunities for developers to enhance their skills and knowledge in areas such as software design principles, coding best practices, and emerging technologies. Encourage continuous learning and professional development to empower developers to make informed decisions and implement effective solutions.
  2. Technical Debt Reviews: Conduct regular technical debt reviews to assess the current state of the codebase, identify areas of concern, and track progress in addressing technical debt over time. Use metrics and KPIs to measure the impact of technical debt reduction efforts and inform decision-making.
  3. Collaboration and Communication: Foster a culture of collaboration and communication among development teams, stakeholders, and other relevant parties. Encourage open discussions about technical debt, its implications, and potential strategies for repayment, and involve stakeholders in decision-making processes.
  4. Incremental Improvement: Break down technical debt repayment efforts into smaller, manageable tasks and tackle them incrementally. Focus on making gradual improvements over time rather than attempting to address all technical debt issues at once, prioritising high-impact and low-effort tasks to maximise efficiency and effectiveness.

Don’t acquire more debt than you have to

While debt is a quintessential aspect of entrepreneurship, acquiring it unwisely is obviously shooting yourself in the foot. You’ll have to make many decisions and choose between many trade-offs, so you had better be well-informed before putting your finger on the red buttons.

Your service will require infrastructure

Whether you choose one vendor over another or decide to go self-hosted, use containerised technologies, so that future changes to better infrastructures are possible. Containers also provide a consistent environment for development, testing and production. Choose technologies that are good citizens in containerised environments.

Your service will require hardware resources

Whether you choose one or another hardware architecture or any amount of memory, use runtimes that can efficiently use and adapt to any given hardware, so that future changes to better hardware are fruitful. For example Erlang’s concurrency model is famous for automatically taking advantage of any number of cores, and with technologies like Elixir’s Nx you can take advantage of esoteric GPUs and TPUs hardware for your machine learning tasks.

Your service will require agility

The market will push your offerings to its limit, in a never-ending stream of requests for new functionality and changes to your service. Your code will need to change, and respond to changes. From Elixir’s metaprogramming and language extensibility to Gleam’s strong type-safety, prioritise tools that likewise aid your developers to change things safely and powerfully.

Your service will require resiliency

There are two philosophies in the culture of error handling: either it is mathematically proven that errors cannot happen – Haskell’s approach – or it is assumed they can’t always be avoided and we need to learn to handle them – Erlang’s approach. Wise technologies take one starting point as an a-priori foundation of the technology and, a-posteriori, deal with the other end. Choose wisely your point on the scale, and be wary of technologies that don’t take a safe stance. Errors can happen: electricity goes down, cables are cut, and attackers attack. Programmers have bad sleepless nights or get sick. Take a stance, before errors bite your service.

Your service will require availability

No fancy unique idea will sell if it can’t be bought, and no service will be used if it is not there to begin with. Unavailability takes an exponential toll on your revenue, so prioritise availability. Choose technologies that can handle not just failure, but even upgrades (!), without downtime. And to have real availability, you always need at least two computers, in case one dies: choose technologies that make many independent computers cooperate easily and can take over another’s work transparently.

A Case Study: A Balancing Act in Traffic Management

A chat system, like many web services, handles a countably infinite number of independent users. It is a heavily network-based application that needs to respond to requests that are independent of each other in a timely and fair manner. It is an embarrassingly parallel problem, as messages can be processed independently of each other, but it is also a challenge with soft real-time properties, where messages should be processed sufficiently soon for a human to have a good user experience. It also faces the challenge of bad actors, which makes request blacklisting and throttling necessary.

MongooseIM is one such system. It is written in Erlang, and in its architecture, every user is handled by one actor.

It is containerised, and easily uses all available resources efficiently and smoothly, adapting to any change of hardware, from small embedded systems to massive mainframes. Its architecture makes heavy use of the Publish-Subscribe programming pattern and, because Erlang is a functional language in which functions are first-class citizens, handler functions are installed extensively for all sorts of events, since we never know what new functionality we will need to implement in the future.

One important event is a new session starting: mechanisms for blacklisting are plenty, whether they’re based on specific identifiers, IP regions, or even modern AI-based behaviour analysis. We can’t predict the future, so we simply publish the “session opened” event and leave it for our future selves to install the right handler when it is needed.
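
Conceptually, installing handlers for published events looks something like the Python sketch below; this only illustrates the pattern, since MongooseIM’s actual hooks are Erlang functions with their own API.

from collections import defaultdict
from typing import Callable

handlers: dict[str, list[Callable]] = defaultdict(list)

def install(event: str, handler: Callable) -> None:
    handlers[event].append(handler)

def publish(event: str, payload: dict) -> None:
    for handler in handlers[event]:
        handler(payload)

# Today we only log the event; a blacklist check or AI-based behaviour analysis
# can be installed later without touching the code that publishes the event.
install("session_opened", lambda p: print(f"session opened for {p['user']}"))
publish("session_opened", {"user": "alice@localhost", "ip": "203.0.113.7"})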

Another important event is that of a simple message being sent. What if bad actors have successfully opened sessions and start flooding the system, consuming the CPU and database unnecessarily? Again, changing requirements might dictate that the system should handle some users with preferential treatment. One default option is to slow down all message processing to some reasonable rate, for which we use a traffic shaping mechanism called the Token Bucket algorithm, implemented in our library Opuntia – named that way because if you touch it too fast, it stings you.
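
A minimal token bucket looks like the following Python sketch; it illustrates the algorithm only and is not Opuntia’s actual Erlang implementation.

import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill in proportion to the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True             # within the allowed rate: process the message
        return False                # over the rate: throttle or queue the message

bucket = TokenBucket(rate=10, capacity=20)   # roughly 10 messages/s with bursts of up to 20
print(bucket.allow())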

You can read more about how scalable MongooseIM is in this article, where we pushed it to its limit. And while we continuously load-test our server, we haven’t done another round of limit-pushing since then. Stay tuned for a future blog post when we do just that!

Lessons Learned

Technical Debt has an inherent value akin to Financial Debt. Choosing the right tool for the job means acquiring the right Technical Debt when needed – leveraging strategies, partnerships, and solutions that prioritise resilience, agility, and long-term sustainability.

The post Balancing Innovation and Technical Debt appeared first on Erlang Solutions.

by Nelson Vides at May 23, 2024 10:58

May 21, 2024

JMP

Newsletter: SMS Routes, RCS, and more!

Hi everyone!

Welcome to the latest edition of your pseudo-monthly JMP update!

In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client.  Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

SMS Censorship, New Routes

We have written before about the increasing levels of censorship across the SMS network. When we published that article, we had no idea just how bad things were about to get. Our main SMS route decided at the beginning of April to begin censoring all messages both ways containing many common profanities. There was quite some back and forth about this, but in the end this carrier has declared that the SMS network is not meant for person-to-person communication and they don’t believe in allowing any profanity to cross their network.

This obviously caused us to dramatically step up the priority of integration with other SMS routes, work which is now nearing completion. We expect very soon to be offering long-term customers new options which will not only dramatically reduce the censorship issue, but also in some cases remove the max-10 group text limit, dramatically improve acceptance by online services, and more.

RCS

We often receive requests asking when JMP will add support for RCS, to complement our existing SMS and MMS offerings. We are happy to announce that we have RCS access in internal testing now. The currently-possible access is better suited to business use than personal use, though a mix of both is certainly possible. We are assured that better access is coming later in the year, and will keep you all posted on how that progresses. For now if you are interested in testing this, especially if you are a business user, please do let us know and we’ll let you know when we are ready to start some testing.

One thing to note is that “RCS” means different things to different people. The main RCS features we currently have access to are typing notifications, displayed/read notifications, and higher-quality media transmission.

Cheogram Android

Cheogram Android 2.15.3-1 was released this month, with bug fixes and new features including:

  • Major visual refresh, including optional Material You
  • Better audio routing for calls
  • More customizable custom colour theme
  • Conversation read-status sync with other supporting apps
  • Don’t compress animated images
  • Do not default to the network country when there is no SIM (for phone number format)
  • Delayed-send messages
  • Message loading performance improvements

New GeoApp Experiment

We love OpenStreetMap, but some of us have found existing geocoder/search options lacking when it comes to searching by business name, street address, etc. As an experimental way to temporarily bridge that gap, we have produced a prototype Android app (source code) that searches Google Maps and allows you to open search results in any mapping app you have installed. If people like this, we may also extend it with a server-side component that hides all PII, including IP addresses, from Google, for a small monthly fee. For now, the prototype is free to test and will install as “Maps+” in your launcher until we come up with a better name (suggestions welcome!).

To learn what’s happening with JMP between newsletters, here are some ways you can find out:

Thanks for reading and have a wonderful rest of your week!

by Stephen Paul Weber at May 21, 2024 19:22

May 17, 2024

Erlang Solutions

Instant Scalability with MongooseIM and CETS

The main feature of the recently released MongooseIM 6.2.1 is the improved CETS in-memory storage backend which makes it much easier to scale up.

It is difficult to predict how much traffic your XMPP server will need to handle. Are you going to have thousands or millions of connected users? Will you need to deliver hundreds of millions of messages per minute? Answering such questions is almost impossible if you are just starting up. This is why MongooseIM offers several means of scalability.

Clustering

Even one machine running MongooseIM can handle millions of connected users, provided that it is powerful enough. However, one machine is not recommended for fault tolerance reasons, because every time it needs to be shut down for maintenance, upgrade or because of any issues, your service would experience downtime. This is why we recommend using a cluster of connected MongooseIM nodes, which communicate efficiently over the Erlang Distribution protocol. Having at least three nodes in the cluster allows you to perform a rolling upgrade, where each node is stopped, upgraded, and then restarted before moving to the next node, maintaining fault tolerance and eliminating unnecessary downtime. During such an upgrade procedure, you can increase the hardware capabilities of each node, scaling the system vertically. Horizontal scaling is even easier because you only need to add new nodes to the already deployed cluster.

Mnesia

Mnesia is a built-in Erlang Database that allows sharing both persistent and in-memory data between clustered nodes. It is a great option at the start of your journey because it resides on the local disk of each cluster node and does not need to be started as a separate component. However, when you are heading towards a real application, a few issues with Mnesia become apparent:

  1. Consistency issues tend to show up quite frequently when scaling or upgrading the cluster. Resolving them requires Erlang knowledge, which should not be the case for a general-purpose XMPP server.
  2. A persistent volume for schema and data is required on each node. Such volumes can be difficult and costly to maintain, seriously impacting the possibilities for automatic scaling.
  3. Unlike relational databases, Mnesia is not designed for storing huge amounts of data, which can lead to performance issues.

After trying to mitigate such issues for a couple of years, we have concluded that it is best not to use Mnesia at all. First and foremost, it is highly recommended not to store any persistent data in Mnesia, and MongooseIM can be configured to store such data in a relational database instead. However, up to version 6.1.0, MongooseIM would still need Mnesia to store in-memory data. For example, a shared table of user sessions is necessary for message routing between users connected to different cluster nodes. The problem is that even without persistent tables, Mnesia still needs to keep its schema on disk, and the first two issues listed above would not be eliminated.

Introducing CETS

Introduced in version 6.2.0 and further refined in version 6.2.1, CETS (Cluster ETS) is a lightweight replication layer for in-memory ETS tables that requires no persistent data. Instead, it relies on a discovery mechanism to connect and synchronise with other cluster nodes. When starting up, each node registers itself in a relational database, which you should use anyway to store all your persistent data. Getting rid of Mnesia removes the last obstacle on your way to easy and simple management of MongooseIM. For example, if you are using Kubernetes, MongooseIM no longer requires any persistent volume claims (PVC’s), which could be costly, can get out of sync, and require additional management. Furthermore, with CETS you can easily set up automatic scaling of your installation.
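
The discovery idea can be pictured with a toy sketch; the snippet below uses Python and SQLite purely as an illustration, while CETS itself is an Erlang library whose actual schema and API differ. Each node writes its own name into a shared table and reads the other rows to find the peers it should connect and synchronise with.

import sqlite3

def register_and_discover(db_path: str, node_name: str) -> list[str]:
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS cluster_nodes (node TEXT PRIMARY KEY)")
    conn.execute("INSERT OR REPLACE INTO cluster_nodes (node) VALUES (?)", (node_name,))
    conn.commit()
    peers = [row[0] for row in
             conn.execute("SELECT node FROM cluster_nodes WHERE node != ?", (node_name,))]
    conn.close()
    return peers   # nodes to connect to and replicate in-memory tables with

print(register_and_discover("cluster.db", "mongooseim@node-1"))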

Installing with Helm

As an example, let’s quickly set up a cluster of three MongooseIM nodes. You will need to have Helm and Kubernetes installed. The examples were tested with Docker Desktop, but they should work with any Kubernetes setup. As the first step, let’s install and initialise a PostgreSQL database with Helm:

$ curl -O https://raw.githubusercontent.com/esl/MongooseIM/6.2.1/priv/pg.sql
$ helm install db oci://registry-1.docker.io/bitnamicharts/postgresql \
   --set auth.database=mongooseim --set auth.username=mongooseim --set auth.password=mongooseim_secret \
   --set-file 'primary.initdb.scripts.pg\.sql'=pg.sql

It is useful to monitor all Kubernetes resources in another shell window:

$ watch kubectl get pod,sts,pvc,pv,svc,hpa

As soon as pod/db-postgresql-0 is shown as ready, you can check that the DB is running:

$ kubectl exec -it db-postgresql-0 -- \
  env PGPASSWORD=mongooseim_secret psql -U mongooseim -c 'SELECT * from users'

As a result, you should get an empty list of MongooseIM users. Next, let’s create a three-node MongooseIM cluster using the Helm Chart:

$ helm repo add mongoose https://esl.github.io/MongooseHelm/
$ helm install mim mongoose/mongooseim --set replicaCount=3 --set volatileDatabase=cets \
   --set persistentDatabase=rdbms --set rdbms.tls.required=false --set rdbms.host=db-postgresql \
   --set resources.requests.cpu=200m

By setting persistentDatabase to RDBMS and volatileDatabase to CETS, we are eliminating the need for Mnesia, so no PVC’s are created. To connect to PostgreSQL, we specify db-postgresql as the database host. The requested CPU resources are 0.2 of a core per pod, and they will be useful for autoscaling. You can monitor the shell window, where watch kubectl … is running, to make sure that all MongooseIM nodes are ready. It is useful to verify logs as well, e.g. kubectl logs mongooseim-0 should display logs from the first node. To see how easy it is to scale up horizontally, let’s increase the number of MongooseIM nodes (which correspond to Kubernetes pods) from 3 to 6:

$ kubectl scale --replicas=6 sts/mongooseim

You can use kubectl logs -f mongooseim-0 to see the log messages about each newly added node of the CETS cluster. With helm upgrade, you can do rolling upgrades and scaling as well. The main difference is that the changes done with helm are permanent.

Autoscaling

Should you need automatic scaling, you can set up the Horizontal Pod Autoscaler. Please ensure that you have the Metrics Server installed. There are separate instructions to install it in Docker Desktop. We have already set the requested CPU resources to 0.2 of a core per pod, so let’s start the autoscaler now:

$ kubectl autoscale sts mongooseim --cpu-percent=50 --min=1 --max=8

It is going to keep the CPU usage at 0.1 of a core per pod (which is 50% of the requested 0.2). The threshold is set this low so that scaling up is easy to trigger; in any real application it should be much higher. You should see the cluster getting scaled down until it has just one node, because there is no CPU load yet. See the reported targets in the window where you have the watch kubectl … command running. To trigger scaling up, we need to put some load on the server. We could just fire up random HTTP requests, but let’s instead use the opportunity to explore the MongooseIM CLI and GraphQL API. Firstly, create a new user on the first node with the CLI:

$ kubectl exec -it mongooseim-0 -- \
  mongooseimctl account registerUser --domain localhost --username alice --password secret

Next, you can send XMPP messages in a loop with the GraphQL Client API:

$ LB_HOST=$(kubectl get svc mongooseim-lb \
  --output jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ BASIC_AUTH=$(echo -n 'alice@localhost:secret' | base64)
$ while true; \
  do curl --get -N -H "Authorization:Basic $BASIC_AUTH" \
    -H "Content-Type: application/json" --data-urlencode \
    'query=mutation {stanza {sendMessage(to: "alice@localhost", body: "Hi") {id}}}' \
    http://$LB_HOST:5561/api/graphql; \
  done

You should observe new pods being launched as the load increases. If there is not enough load, run the snippet in a few separate shell windows. Stopping the script should bring the cluster size back down.

Summary

Thanks to CETS and the Helm Chart, MongooseIM 6.2.1 can be easily installed, maintained and scaled in a cloud environment. What we have shown here are the first steps, and there is much more to explore. To learn more, you can read the documentation for MongooseIM or check out the live demo at trymongoose.im. Should you have any questions, or if you would like us to customise, configure, deploy or maintain MongooseIM for you, feel free to contact us.

The post Instant Scalability with MongooseIM and CETS appeared first on Erlang Solutions.

by Pawel Chrzaszcz at May 17, 2024 10:22

May 14, 2024

Erlang Solutions

The Golden Age of Data Transformation in Healthcare

Data is the lifeline of the healthcare industry and we are in its golden age. Staggering amounts are generated daily and precision is key to ensure that every scan, test, prescription, and diagnosis produces data that leads to improved patient outcomes and quality future care.

But what happens when the liquid gold that is reliable data can no longer be accessed? 

Trust and confidence hinge on access to patient information, providing a holistic view of patient history and the analytics needed to gain actionable insights.

Any disruption to this access could compromise a patient’s health and damage organisational reputation, with potential financial consequences.

Let’s further examine data’s profound impact on healthcare, emphasising its indispensable role in enhancing clinical practices, operational efficiency and fostering patient welfare.

Accessing data

Effective data doesn’t just sit in silos. It is all deeply interconnected, providing value far beyond a single use case. We’re talking about a system of multiple processes relying on data to be accessed across medical records, diagnostics, devices, medical histories and more.

This is where interoperability comes in. When technologies are interoperable, they become a single, cohesive unit, designed for seamless integration between internal and external systems.

The need for interoperability

The need for interoperability cannot be overstated. 

Data generated in healthcare delivery has been increasing by 47% yearly. Interoperability allows for various stakeholders within the healthcare network to share access. Think of your pharmacies, hospitals, clinics, and insurers, who use that data to access:

  • Billing information
  • Patient data and medical records
  • Wider population data

While most healthcare providers agree that they need to adopt interoperability to improve their data quality, it is reported that less than 40% of providers (in the US) have done so well enough to be able to share data with other organisations. There is a clear discrepancy between the increase in healthcare data and the success of system integration.

Interoperability holds immense promise for the world of healthcare, but there are immediate challenges to be addressed:

  • Too much data – Preventing data overflow is not an easy task. Interoperability deals with the influx of EHR (electronic health record) and EMR (electronic medical record) information. It also manages data from IoT sources, internal administrative systems and more. Failing to handle this volume can prove disruptive to overall systems.
  • Lack of resources – Areas of the healthcare industry often lack the financial resources to make the changes required. Making systems interoperable requires an initial investment, but it allows for much larger long-term savings.
  • Questionable data exchange practices – Interoperability has also been used simply to line the pockets of providers. Known as information blocking, this practice involves imposing fees for digital access to healthcare data as new revenue streams are identified.

We’ll revisit the current state of data exchange practices in more detail. 

But first, let’s explore how IoT (Internet of Things) can act as a solution to the aforementioned problems, leveraging technologies to drive success.

Exploring IoT

While the healthcare industry struggles with interoperability, it is very well-known for its agility in adopting new technologies. It continuously innovates through the vast landscape known as the Internet of Things (IoT). 

IoT allows for new, innovative applications and services that have the potential to completely transform the face of healthcare. 

There are some majorly compelling use cases for IoT in healthcare, and a lot of its benefits are tangible. Some increasingly popular services include:

  • Telemedicine
  • Remote patient monitoring
  • Smart hospitals
  • Asset tracking and management

A popular example of healthcare using IoT for asset tracking and asset management is from HCA Healthcare. Operating in over 2,000 healthcare facilities across the US, HCA has implemented RFID tags designed to track medical equipment and supplies, all to enhance asset tracking, management and interoperability with other healthcare information.

So what’s the issue? Let’s return to the point about data exchange practices. There is an argument that this setup is restrictive: data collected through RFID tags is not easily accessible or shareable with other providers or systems, which could hinder the exchange of information, leading to data blocking.

There is also an issue of cost. Initial investment aside, hospitals would be forced to upgrade their existing devices so that data from those devices can be sent automatically, potentially costing an establishment millions. Consider the financial impact that could have on a single hospital ward, let alone an entire annual hospital budget. All of these considerations could impact HCA’s ability to fully leverage RFID technology and overcome potential data-blocking issues.

Data Solutions

When exploring solutions to potential data blocking, it’s worth considering systems that allow facilities to make the most of their existing systems without the need to replace existing medical equipment or incur data access charges from medical device manufacturers who have spotted a potential new revenue model. 

There are technologies available with capabilities that help to address the interoperability challenges facing the industry. Reliability is key, and these technologies enable cost-effective solutions designed to seamlessly integrate subsystems, ensuring the efficient exchange of data without the need for a complete system overhaul.

Organisations should always take care when implementing any sort of IoT solution and seamless, cost-effective integrations should always be top of mind.

Securely moving data

A staggering 200 million+ healthcare patient records have been exposed in data breaches in the past decade alone. The healthcare industry has been the most expensive sector for the cost of data breaches for 13 years in a row, according to IBM’s Cost of a Data Breach report.

Confidential patient information, financial details and other sensitive data have been compromised. This knock-on effect ultimately undermines confidentiality for healthcare providers and patients alike. Amidst this growing challenge lies a need for secure, compliant handling of healthcare data.

Utilising Blockchain

By utilising blockchain technology, healthcare providers have access to enhanced privacy and integrity of their medical data, which minimises the associated risks of cyberattacks and security breaches.

Blockchain technology can provide a great solution for securely moving healthcare data. This is thanks to blockchain’s distributed ledger technology (DLT). This technology facilitates the secure transfer of patient’s medical records. It also helps to strengthen data defences and allows for the improved management of the medicine supply chain.

But there are costs to consider. When moving data into Patient Information Systems (PIS), there may be upfront implementation costs as well as ongoing maintenance costs. Healthcare providers must weigh these against their budget and their need for data security and compliance with privacy regulations.

Regulations and compliance

As well as the financial considerations, other weaknesses in blockchain technology must be considered. 

This includes a lack of standardisation, accessibility and regulatory clarity. Take HIPAA, for example. This regulation has strict mandates in place to protect healthcare information. When discussing public blockchain, data privacy becomes an issue: public blockchains are designed for transparent transactions, which goes against HIPAA requirements and makes them incompatible. Failure to adhere can lead to fines and various non-compliance penalties.

Moving to private blockchain also poses its obstacles: 

Issues with centralisation: Private blockchains can offer more control over data access and governance, but there are still questions about who owns the data. HIPAA requires clear ownership and accountability for data, which private blockchains must still satisfy.

Standardised data: HIPAA requires consistent data formats to ensure an accurate data exchange. Achieving this across multiple private blockchains is difficult and could have an impact on collaboration and overall data sharing.

Interoperability: There are various stakeholders involved across many institutions, such as insurers and hospitals, so interoperability is needed for an effective exchange of data.

Leveraging innovative communication

Healthcare companies looking to manage their patient data and communications are adopting a host of apps and new comms channels to reliably share data. For example:

  • Electronic Health Records (EHR)
  • Electronic Medical Records (EMR)
  • Imaging data
  • Wearables 

But managing healthcare by these various means raises pertinent issues surrounding data security and privacy. Data needs to be stored securely and with the utmost confidentiality. Healthcare personnel must also keep on top of the latest technological advances to ensure data is not vulnerable to hacks or security breaches. But these system upgrades also carry short- and long-term financial costs to maintain.

There are other ways to leverage secure and effective communication within the healthcare industry using different channels, as highlighted by Pando Health.

Developed by junior doctors and technologists, Pando sought to address the need for secure communication platforms for healthcare professionals.

While they initially used a SaaS messaging platform for their prototype, they soon faced scalability limitations. Through the use of MongooseIM, an open-source, highly scalable messaging server, they were able to meet the needs of healthcare communication without having to replace the entire system.

The results?

  • A secure, NHS-approved chat system.
  • A medical app designed for secure and compliant communication, used by over 65,000 professionals.
  • A collaborative platform, designed for medical professionals without compromising patient security.

There are options for healthcare organisations to ensure secure data channels while complying with legislative requirements and maintaining patient confidentiality.

Being future-ready and future-proof

We’ve already mentioned that data volume in healthcare will continue to expand exponentially. The challenge now lies in ensuring that healthcare providers adopt a strategic approach to brace themselves for this future growth. A lack of strategy leads to a loss of control over the access and organisation of your data, impacting those patients who need care the most.

When compared to other industries, healthcare already falls behind in the Future-Ready Business Benchmark. But positive steps are being taken industry-wide to ensure 2024 specifically strengthens the healthcare industry, as we move towards digital-based healthcare, thanks to key trends and breakthrough innovations.

Implementing improved systems

Managing masses of data is becoming increasingly difficult. The need for rapid and reliable access to data, combined with the need for data to be retained for extended periods of time, presents some serious archival and storage challenges. Many of these challenges are near impossible to address with existing healthcare legacy systems.

Organisations require scalability and reliability to improve services and modernise. Many places have already started to adopt solutions to consolidate storage and data needs into long-term, future strategies. 

Some of these systems include:

  • The Internet of Medical Things – Companies that specialise in IoMT often partner with software professionals to build systems that connect to wearables, tracking key health metrics like blood pressure and heart rate in real time.
  • Scalable telehealth services – Various telehealth systems are built on scalable mobile health platforms, where data from patients is acquired and transmitted via wireless communication networks.
  • Machine learning – Auto-scaling algorithms derive insights from continuously increasing healthcare data.

Adopting forward-thinking strategies becomes imperative as the healthcare industry strives to modernise and improve its services. Embracing reliable and scalable services is the only way to ensure longevity and effective management for the long-term care of patients in the digital age.

To conclude

The journey and ever-evolving complexities of healthcare data mark what we can call the Golden Age of Data Transformation. 

Accessing data wherever it is created and stored is a key priority for any digital transformation strategy. 

As we aim for the improvement of operational efficiency and patient outcomes, prioritising data quality, accessibility and interoperability of systems is non-negotiable. Organisations should focus on building scalable and robust infrastructures to tackle these challenges.

Staying flexible and investing in long-term strategies empower healthcare professionals to navigate the data landscape effectively, ultimately delivering better care for patients. 

The post The Golden Age of Data Transformation in Healthcare appeared first on Erlang Solutions.

by Erlang Solutions Team at May 14, 2024 10:23

Comparing Elixir vs Java

After many years of active development using various languages, in the past months, I started learning Elixir. I got attracted to the language after I heard and read nice things about it and the BEAM VM, but – to support my decision about investing time to learn a new language – I tried to find a comparison between Elixir and various other languages I already knew.

What I found was pretty disappointing. In most of these comparisons, Elixir performed much worse than Java, even worse than most of the mainstream languages. With these results in mind, it became a bit hard to justify my decision to learn a new language with such a subpar performance, however fancy its syntax and other features were. After delving into the details of these comparisons, I realised that all of them were based on simplistic test scenarios and specialised use cases, basically a series of microbenchmarks (i.e. small programs created to measure a narrow set of metrics, like execution time and memory usage). It is obvious that the results of these kinds of benchmarks are rarely representative of real-life applications.

My immediate thought was that a more objective comparison would be useful not only for me but for others as well. But before discussing the details, I’d like to compare several aspects of Elixir and Java that are not easily quantifiable.

Development

Learning curve

Before I started learning Elixir, I used various languages like Java, C, C++, Perl, and Python. Despite the fact that all of them are imperative languages while Elixir is a functional one, I found the language concepts clear and concise, and – to tell the truth – much less complex than Java. Similarly, Elixir syntax is less verbose and easier to read and follow.

When comparing language complexities, there is an often forgotten, but critical thing: It’s hard to develop anything more complex than a Hello World application just by using the core language. To build enterprise-grade software, you should use at least the standard library, but in most cases, many other 3rd party libraries. They all contribute to the learning curve.

In Java, the standard library is part of the JDK and provides basic support for almost every possible use, but it lacked the most important thing, a component framework (like the Spring Framework or OSGi), for about 20 years. During that time, several good component frameworks were developed and became widespread, but they all come with different design principles, configuration and runtime behaviour, so for a novice developer, the aggregated learning curve is pretty steep.

On the other hand, Elixir has had OTP from the beginning, a collection of libraries once called the Open Telecom Platform. OTP provides its own component framework which shares the same concepts and design principles as the core language.

Documentation

I was a bit spoiled by the massive amount of tutorials, guides and forum threads in the Java ecosystem, not to mention the really nice Javadoc that comes with the JDK. It’s not that Elixir lacks the appropriate documentation: there are really nice tutorials and guides, and most of the libraries are as well documented as their Java counterparts, but it will take time for the ecosystem to reach the same level of quality. There are counterexamples, of course – the Getting Started guide is a piece of cake, and I didn’t need anything else to learn the language and start active development.

IDE support

For me as a novice Elixir developer, the most important roadblock was the immature IDE support. Although I understand that supporting a dynamically typed language is much harder than a statically typed one like Java, I miss the most basic refactoring support in both IntelliJ IDEA and VS Code. I know that Emacs offers more features, but being a hardcore vi user, I kept some distance from it.

Fortunately, these shortcomings can be improved easily, and I’m sure there are enough interested developers in the open-source world, but as usual, some coordination would be needed to facilitate the development.

Programming model

Comparing entire programming models of two very different languages is too much for a blog entry, so I’d like to focus on the language support for performance and reliability, more precisely several aspects of concurrency, memory management and error handling.

Concurrency and memory management

The Java Memory Model is based on POSIX Threads (pthreads). Heap memory is allocated from a global pool and shared between threads. Resource synchronisation is done using locks and monitors. A conventional Java thread (Platform Thread) is a simple wrapper around an OS thread. Since an OS thread comes with its own large stack and is scheduled by the OS, it is not lightweight in any way. Java 21 introduced a new thread type (Virtual Thread) which is more lightweight and scheduled by the JVM, so it can be suspended during a blocking operation, allowing the OS thread to mount and execute another Virtual Thread. Unfortunately, this is only an afterthought. While it can improve the performance of many applications, it makes the already complex concurrency model even more complicated. The same is true for Structured Concurrency: while it can improve reliability, it also increases complexity, especially if it is mixed with the old model. The same applies to 3rd party libraries – adopting the new features and upgrading old deployments will take time, typically years. Until then, a mixed model will be used, which can introduce additional issues.

There are several advantages of adopting POSIX Threads, however: it is familiar for developers of languages implementing similar models (e.g. C, C++ etc.), and keeps the VM internals fairly simple and performant. On the other hand, this model makes it hard to effectively schedule tasks and heavily constrains the design of reliable concurrent code. And most importantly, it introduces issues related to concurrent access to shared resources. These issues can materialise in performance bottlenecks and runtime errors that are hard to debug and fix.

The concurrency model of Elixir is based on different concepts, introduced by Erlang in the 80s. Instead of scheduling tasks as OS threads, it uses a construct called a “process”, which is different from an operating system process. These processes are very lightweight, operate on independently allocated and deallocated memory areas, and are scheduled by the BEAM VM. Scheduling is done by multiple schedulers, one for each CPU core. There is no shared memory, no synchronised resource access and no global garbage collection; inter-process communication is performed using asynchronous signalling. This model eliminates the conventional concurrency-related problems and makes it much easier to write massively concurrent, scalable applications. There is one drawback, however: due to these conceptual differences, the learning curve is a bit steeper for developers experienced only with pthreads-related models.
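
To make this concrete, here is a minimal sketch of spawning lightweight processes and collecting results via message passing (a generic illustration, not code from the benchmark):

parent = self()

# Spawn one lightweight BEAM process per work item; each process has its own
# heap and communicates with the parent only via asynchronous messages.
pids =
  for n <- 1..1_000 do
    spawn(fn -> send(parent, {:done, n * n}) end)
  end

# Collect one reply per spawned process; no locks or shared memory involved.
results =
  for _ <- pids do
    receive do
      {:done, value} -> value
    end
  end

IO.puts("Collected #{length(results)} results")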

Fault tolerance

Error recovery and fault tolerance in general are underrated in the world of enterprise software. For some reason, we think that fault tolerance is for mission-critical applications like controlling nuclear power plants, running medical devices or managing aircraft avionics. In reality, almost every business has critical software assets and applications that should be highly available or data, money and consumer trust will be lost. Redundancy may prevent critical downtimes, but no amount of redundancy can mitigate the risk of data corruption or other similar errors, not to mention the cost of duplicated resources.

Java and Elixir handle errors in very different ways. While Java follows decades-old conventions and treats errors as exceptional situations, Elixir inherited a far more powerful concept from Erlang, originally borrowed from the field of fault-tolerant systems. In Elixir, errors are part of the normal behaviour of the application and are treated as such. Since there are no shared resources between processes, an error during the execution of a process does not affect nor propagate to the others; their states remain consistent, so the application can safely recover from the error. In addition, supervision trees can make sure that the failed components will be replaced immediately.

This way, the BEAM VM provides guarantees against data loss during error recovery. But this kind of error recovery is possible only if no error can leave the system in an inconsistent state. Since Java relies on OS threads and shared memory cannot be protected from misbehaving threads, the JVM provides no such safeties. Although there are Java libraries that provide better fault tolerance by implementing different programming models (probably the most noteworthy is Akka, implementing the Actor Model), the number of 3rd party libraries supporting these programming models is very limited.
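
As a minimal illustration of the supervision idea (module names are hypothetical, not taken from the benchmark code), a one_for_one supervisor restarts a crashed worker while the rest of the system keeps running:

defmodule Demo.Worker do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(_opts), do: {:ok, %{}}

  # A deliberate crash: the supervisor restarts this process with a clean state,
  # and no other process is affected.
  @impl true
  def handle_cast(:boom, _state), do: raise("simulated failure")
end

children = [Demo.Worker]
Supervisor.start_link(children, strategy: :one_for_one, name: Demo.Supervisor)

# GenServer.cast(Demo.Worker, :boom) would crash the worker;
# the supervisor immediately starts a fresh one under the same name.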

Runtime

Performance

For CPU or memory-intensive tasks, Java is a good choice, due to several things, like a more mature Just In Time compiler and tons of runtime optimisations in the JVM, but most importantly, because of its memory model. Since memory allocation and thread handling are basically done on OS level, the management overhead is very low.

On the other hand, this advantage vanishes when concurrent execution is paired with a mixed workload, like blocking operations and data exchange between concurrent tasks. This is the field where Elixir thrives since Erlang and the BEAM VM were originally designed for these kinds of tasks. Due to the well-designed concurrency model, memory and other resources are not shared, requiring no synchronisation. BEAM processes are more lightweight than Java threads, and their scheduling is done at VM level, leading to fewer context switches and better scheduling granularity.

Concurrent operations also affect memory use. Since a Java thread is not lightweight, the more threads are waiting for execution, the more memory is used. In parallel with the memory allocations related to the increasing number of waiting threads, the overhead caused by garbage collection also grows.

Today’s enterprise applications are usually network-intensive. We have separate databases, microservices, clients accessing our services via REST APIs etc. Compared to operations on in-memory data, network communication is many orders of magnitude slower, latency is not deterministic, and the probability of erroneous responses, timeouts or infrastructure-related errors is not negligible. In this environment, Elixir and the BEAM VM offer more flexibility and concurrent performance than Java.

Scalability

When we talk about scalability, we should mention both vertical and horizontal scalability. While vertical scalability is about making a single hardware bigger and stronger, horizontal scalability deals with multiple computing nodes.

Java is a conventional language in the sense that it is built for vertical scaling, but it was designed at a time when vertical scaling meant running on bigger hardware with better single-core performance. It performs reasonably well on multi-core architectures, but its scalability is limited by its concurrency model, since massive concurrency comes with frequent cache invalidations and lock contention on shared resources. Horizontal scaling magnifies these issues due to the increased latency. Moreover, since the JVM was also designed for vertical scaling, there is no simple way to share or distribute workload between multiple nodes; it requires additional libraries/frameworks, and in many cases, different design principles and massive code changes.

On the other hand, a well-designed Elixir application can scale up seamlessly, without code changes. There are no shared resources that require locking, and asynchronous messaging is perfect for both multi-core and multi-node applications. Of course, Elixir itself does not prevent the developers from introducing features that are hard to scale or require additional work, but the programming model and the OTP make horizontal scaling much easier.
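
To illustrate (node and process names here are hypothetical), message passing across cluster nodes uses the same primitives as local messaging, which is what makes scaling out so unintrusive:

# Connect to another node in the cluster (both nodes started with --sname/--name
# and sharing the same Erlang cookie).
Node.connect(:"app@host2")

# Sending to a process registered under a local name on the remote node
# looks just like sending locally; the VM provides location transparency.
send({:rule_processor, :"app@host2"}, {:evaluate, :rule_1})

# The same applies to GenServer calls addressed to a name on another node:
GenServer.call({RuleProcessor, :"app@host2"}, {:evaluate, :rule_1})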

Energy efficiency

It is a well-known fact that resource and energy usage are highly correlated metrics. However, there is another, often overlooked factor that contributes significantly to energy usage. The concurrency limit is the number of concurrent tasks an application can execute without having stability issues. Near the concurrency limit, applications begin to use the CPU excessively, therefore the overhead of context switches begins to matter a lot. Another consequence is the increased memory usage, caused by the growing number of tasks waiting for CPU time. Since frequent context switches are also memory intensive, we can safely say that applications become much less energy efficient near the concurrency limit.

Maintenance

Tackling concurrency issues is probably the hardest part of any maintenance task. We certainly collect metrics to see what is happening inside the application, but these metrics often fail to provide enough information to identify the root cause of concurrency problems. We have to trace the execution flow to get an idea of what’s going on inside. Profiling or debugging of such issues comes with a certain cost: using these tools may alter the performance behaviour of the system in a way that makes it hard to reproduce the issue or identify the root cause.

Due to the message-passing concurrency model, the code base of a typical concurrent Elixir application is less complex and free from resource-sharing-related implementation mistakes often poisoning Java code, eliminating the need for this kind of maintenance. Also, the BEAM VM is designed with traceability in mind, leading to lower performance cost of tracing the execution flow.

Dependencies

Most of the enterprise applications heavily depend on 3rd party libraries. In the Java ecosystem, even the component framework comes from a 3rd party, with its own dependencies on other 3rd party libraries. This creates a ripple effect that makes it hard to upgrade just one component of such a system, not to mention the backward incompatible changes potentially introduced by newer 3rd party components. Anyone who has tried to upgrade a fairly large Maven project could tell stories about this dependency nightmare.

The Elixir world is no different, but the number of required 3rd party libraries can be much smaller since the BEAM VM and the OTP provide a few useful things (like the component platform, asynchronous messaging, seamless horizontal scalability, supervision trees), functionality that is very often used and can only be found in 3rd party libraries for Java.

Let’s get more technical

As I mentioned before, I was not satisfied with other language comparisons as they are usually based on simplistic or artificial test cases, so I wanted to create something that mimics a common but easy-to-understand scenario, and then measure the performance and complexity of different implementations. Real-world performance is rarely just a number – it is a composite of several metrics like CPU and memory usage, I/O and network throughput – but I tried to quantify it using the processing time, i.e. the time an application needs to finish a task. Another important aspect is the size and complexity of the implementations, since these factors contribute to the development and maintenance costs.

Test scenario

Most real-world applications process data in a concurrent way. This data originates from a database or some other kind of backend, a microservice or a 3rd party service. Either way, data is transferred via a network. In the enterprise world, the dominant way of network communication is HTTP, often as part of a REST workflow. That is the reason why I chose to measure how fast and reliable REST clients can be implemented in Elixir and Java, and in addition, how complex each implementation is.

The workflow starts with reading a configuration from a disk and then gathering data according to the configuration using several REST API calls. There are dependencies in between workflow steps, so several of them can’t be done concurrently, while the others can be done in parallel. The final step is to process the received data.

The actual scenario is to evaluate rules, where each rule contains information used to gather data from 3rd party services and predict utility stock prices based on historical weather, stock price and weather forecast data.

Rule evaluation is done in a concurrent manner. Both the Elixir and Java implementations are configured to evaluate 2 rules concurrently.

Implementation details

Elixir

The Elixir-based REST client is implemented as an OTP application. I tried to minimise the external dependencies since I’d like to focus on the performance of the language and the BEAM VM, and the more 3rd party libraries the application depends on, the more probable it is that one of them becomes a bottleneck.

The dependencies I use:

  • Finch: a very performant HTTP client
  • Jason: a fast JSON parser
  • Benchee: a benchmarking tool

Each concurrent task is implemented as a process, and data aggregation is done using asynchronous messaging. The diagram below shows the rule evaluation workflow.

There are altogether 8 concurrent processes in each task: one process is spawned for each rule, and then 3 processes are started to retrieve stock, historical weather and weather prediction data.
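
The original sources are not reproduced here, but a sketch of this fan-out using Task.async (module and function names are hypothetical) could look roughly like this:

defmodule RuleEvaluator do
  # Evaluate rules concurrently: one process per rule, and three more
  # processes per rule to fetch stock, historical weather and forecast data.
  def evaluate_all(rules) do
    rules
    |> Enum.map(fn rule -> Task.async(fn -> evaluate(rule) end) end)
    |> Task.await_many(:timer.seconds(30))
  end

  defp evaluate(rule) do
    fetchers = [
      Task.async(fn -> fetch(:stock, rule) end),
      Task.async(fn -> fetch(:historical_weather, rule) end),
      Task.async(fn -> fetch(:forecast, rule) end)
    ]

    # Aggregate the three responses, then run the prediction for this rule.
    fetchers
    |> Task.await_many(:timer.seconds(30))
    |> predict(rule)
  end

  # Placeholders for the REST calls (e.g. via Finch) and the prediction logic.
  defp fetch(_kind, _rule), do: %{}
  defp predict(_data, _rule), do: :ok
end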

Java

The Java-based REST client is implemented as a standalone application. Since the Elixir application uses OTP, a fair comparison would be to use some kind of component framework, like Spring or OSGi, since both are very common in the enterprise world. However, I decided not to use them, as they both would contribute heavily to the complexity of the application, although they wouldn’t change the performance profile much.

Dependencies:

There are two implementations of concurrent task processing. The first one uses two platform thread pools, for rule processing and for retrieving weather data. This might seem a bit naive, as this workflow could be optimised better, but please keep in mind that

  1. I wanted to model a generic workflow, and it is quite common that several thread pools are used for concurrent processing of various tasks.
  2. My intent was to find the right balance between implementation complexity and performance.

The other implementation uses Virtual Threads for rule processing and weather data retrieval.

The diagram below shows the rule evaluation workflow.

There are altogether 6 concurrent threads in each task: one thread is started for each rule, and then 2 threads are started to retrieve historical weather and weather prediction data.

Results

Hardware

Google Compute Node

  • CPU Information: AMD EPYC 7B12
  • Number of Available Cores: 8
  • Available memory: 31.36 GB

Tasks | Elixir | Java (Platform Threads) | Java (Virtual Threads)
320   | 2.52 s | 2.52 s                  | 2.52 s
640   | 2.52 s | 2.52 s                  | 2.52 s
1280  | 2.51 s | 2.52 s, 11% error       | 2.52 s
2560  | 5.01 s | –                       | 2.52 s, 7 errors
5120  | 5.01 s | –                       | High error rate
10240 | 5.02 s | –                       | –
20480 | 7.06 s | –                       | –

Detailed results

Elixir

  • Elixir 1.16.2
  • Erlang 26.2.4
  • JIT enabled: true

Concurrent tasks per minute | Average | Median | 99th % | Remarks
5     | 2.5 s  | 2.38 s | 3.82 s |
10    | 2.47 s | 2.38 s | 3.77 s |
20    | 2.47 s | 2.41 s | 3.77 s |
40    | 2.5 s  | 2.47 s | 3.79 s |
80    | 2.52 s | 2.47 s | 3.82 s |
160   | 2.52 s | 2.49 s | 3.78 s |
320   | 2.52 s | 2.49 s | 3.77 s |
640   | 2.52 s | 2.47 s | 3.81 s |
1280  | 2.51 s | 2.47 s | 3.8 s  |
2560  | 5.01 s | 5.0 s  | 5.17 s |
3840  | 5.01 s | 5.0 s  | 5.11 s |
5120  | 5.01 s | 5.0 s  | 5.11 s |
10240 | 5.02 s | 5.0 s  | 5.15 s |
15120 | 5.53 s | 5.56 s | 5.73 s |
20480 | 7.6 s  | 7.59 s | 8.02 s |

Java 21, Platform Threads

  • OpenJDK 64-Bit Server VM, version 21

Concurrent tasks per minute | Average | Median | 99th % | Remarks
5    | 2.5 s  | 2.36 s | 3.71 s |
10   | 2.54 s | 2.48 s | 3.69 s |
20   | 2.5 s  | 2.5 s  | 3.8 s  |
40   | 2.56 s | 2.45 s | 3.84 s |
80   | 2.51 s | 2.46 s | 3.8 s  |
160  | 2.5 s  | 2.5 s  | 3.79 s |
320  | 2.52 s | 2.46 s | 3.8 s  |
640  | 2.52 s | 2.48 s | 3.8 s  |
1280 | 2.52 s | 2.47 s | 3.8 s  | 11% HTTP timeouts

Java 21, Virtual Threads

  • OpenJDK 64-Bit Server VM, version 21

Concurrent tasks per minute | Average | Median | 99th % | Remarks
5    | 2.46 s | 2.49 s | 3.8 s  |
10   | 2.51 s | 2.52 s | 3.68 s |
20   | 2.56 s | 2.44 s | 3.79 s |
40   | 2.53 s | 2.46 s | 3.8 s  |
80   | 2.52 s | 2.48 s | 3.79 s |
160  | 2.52 s | 2.49 s | 3.77 s |
320  | 2.52 s | 2.48 s | 3.8 s  |
640  | 2.52 s | 2.49 s | 3.8 s  |
1280 | 2.52 s | 2.48 s | 3.8 s  |
2560 | 2.52 s | 2.48 s | 3.8 s  | Errors: 7 (HTTP client EofException)
3840 | N/A    | N/A    | N/A    | Large amount of HTTP timeouts

Stability

Under high load, strange things can happen. Concurrency-related (thread contention, races), operating system or VM-related (resource limits) and hardware-specific (memory, I/O, network etc.) errors may occur at any time. Many of them cannot be handled by the application, but the runtime usually can (or should) deal with them to provide reliable operation even in the presence of faults.

During the test runs, my impression was that the BEAM VM is superior in this task, in contrast to the JVM which entertained me with various cryptic error messages, like the following one:

java.util.concurrent.ExecutionException: java.io.IOException
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at esl.tech_shootout.RuleProcessor.evaluate(RuleProcessor.java:38)
        at esl.tech_shootout.RuleProcessor.lambda$evaluateAll$0(RuleProcessor.java:29)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
        at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: java.io.IOException
        at java.net.http/jdk.internal.net.http.HttpClientImpl.send(HttpClientImpl.java:586)
        at java.net.http/jdk.internal.net.http.HttpClientFacade.send(HttpClientFacade.java:123)
        at esl.tech_shootout.RestUtils.callRestApi(RestUtils.java:21)
        at esl.tech_shootout.StockService.stockSymbol(StockService.java:23)
        at esl.tech_shootout.StockService.stockData(StockService.java:17)
        at esl.tech_shootout.RuleProcessor.lambda$evaluate$3(RuleProcessor.java:37)
        ... 4 more
Caused by: java.nio.channels.ClosedChannelException
        at java.base/sun.nio.ch.SocketChannelImpl.ensureOpen(SocketChannelImpl.java:195)

Although in this case, I know the cause of this error, the error message is not very informative. Compare the above stack trace with the error raised by Elixir and the BEAM VM:

16:29:53.822 [error] Process #PID<0.2373.0> raised an exception
** (RuntimeError) Finch was unable to provide a connection within the timeout due to excess queuing for connections. Consider adjusting the pool size, count, timeout or reducing the rate of requests if it is possible that the downstream service is unable to keep up with the current rate.

    (nimble_pool 1.0.0) lib/nimble_pool.ex:402: NimblePool.exit!/3
    (finch 0.18.0) lib/finch/http1/pool.ex:52: Finch.HTTP1.Pool.request/6
    (finch 0.18.0) lib/finch.ex:472: anonymous fn/4 in Finch.request/3
    (telemetry 1.2.1) /home/sragli/git/tech_shootout/elixir_demo/deps/telemetry/src/telemetry.erl:321: :telemetry.span/3
    (elixir_demo 0.1.0) lib/elixir_demo/rule_processor.ex:56: ElixirDemo.RuleProcessor.retrieve_weather_data/3
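
The message even points at a remedy. As a hedged sketch (the pool sizes and the DemoFinch name are illustrative, not the values used in the benchmark), the Finch pool can be enlarged where it is started in the supervision tree:

children = [
  {Finch,
   name: DemoFinch,
   pools: %{
     # Four pools of 50 connections each for this host; tune to the load profile.
     "https://api.example.com" => [size: 50, count: 4],
     # Fallback pool for any other host.
     :default => [size: 25, count: 1]
   }}
]

Supervisor.start_link(children, strategy: :one_for_one)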

This thread dump shows what happens when we mix different concurrency models:

Thread[#816,HttpClient@6e579b8-816,5,VirtualThreads]
 at java.base@21/jdk.internal.misc.Unsafe.park(Native Method)
 at java.base@21/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:269)
 at java.base@21/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1758)
 at app//org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:219)
 at app//org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:1139)


The Jetty HTTP client is a nice piece of code and very performant, but uses platform threads in its internals, while our benchmark code relies on virtual threads.

That’s why I had to switch from JDK HttpClient to Jetty:

Caused by: java.io.IOException: /172.17.0.2:60876: GOAWAY received
       at java.net.http/jdk.internal.net.http.Http2Connection.handleGoAway(Http2Connection.java:1166)
       at java.net.http/jdk.internal.net.http.Http2Connection.handleConnectionFrame(Http2Connection.java:980)
       at java.net.http/jdk.internal.net.http.Http2Connection.processFrame(Http2Connection.java:813)
       at java.net.http/jdk.internal.net.http.frame.FramesDecoder.decode(FramesDecoder.java:155)
       at java.net.http/jdk.internal.net.http.Http2Connection$FramesController.processReceivedData(Http2Connection.java:272)
       at java.net.http/jdk.internal.net.http.Http2Connection.asyncReceive(Http2Connection.java:740)
       at java.net.http/jdk.internal.net.http.Http2Connection$Http2TubeSubscriber.processQueue(Http2Connection.java:1526)
       at java.net.http/jdk.internal.net.http.common.SequentialScheduler$LockingRestartableTask.run(SequentialScheduler.java:182)
       at java.net.http/jdk.internal.net.http.common.SequentialScheduler$CompleteRestartableTask.run(SequentialScheduler.java:149)
       at java.net.http/jdk.internal.net.http.common.SequentialScheduler$SchedulableTask.run(SequentialScheduler.java:207)
       at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
       at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
       at java.base/java.lang.Thread.run(Thread.java:1583)

According to the HTTP/2 standard, an HTTP server can send a GOAWAY frame at any time (typically under high load – in our case, after about 2000 requests/min) to indicate connection shutdown. It is the client’s responsibility to handle this situation. The HttpClient implemented in the JDK fails to do that internally, and it does not provide enough information to make proper error handling possible.

Concluding remarks

As I expected, both the Elixir and Java applications performed well in low concurrency settings, but the Java application became less stable as the number of concurrent tasks increased, while Elixir exhibited rock-solid performance with minimal slowdown.

The BEAM VM was also superior in providing reliable operation under high load, even in the presence of faults. After about 2000 HTTP requests per second, timeouts were inevitable, but they didn’t impact the stability of the application. On the other hand, the JVM started to behave very erratically after about 1000 (Platform Threads-based implementation) or 3000 (Virtual Threads-based implementation) concurrent tasks.

Code complexity

There are a few widely accepted metrics to quantify code complexity, but I think the most representative ones are Lines of Code and Cyclomatic Complexity.

Lines of Code, or more precisely Source Lines of Code (SLoC for short), quantifies the total number of lines in the source code of an application. Strictly speaking, it is not very useful as a complexity measure, but it is a good indicator of how much effort is needed to look through a particular codebase. Source Lines of Code is measured by counting the total number of lines in all source files, not including the dependencies and configuration files.

Cyclomatic Complexity (CC for short) is more technical as it measures the number of independent execution paths through the source code. CC measurement works in a different way for each language. Cyclomatic Complexity of the Elixir application is measured using Credo, and CC of the Java application is quantified using the CodeMetrics plugin of IntelliJ IDEA.

These numbers show that there is a clear difference in complexity even between such small and simple applications. While 9 is not a particularly high score for Cyclomatic Complexity, it indicates that the logical flow is not simple. It might not be concerning, but what’s more problematic is that even the most basic error handling increases the complexity by 3.
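
As a side note, such a threshold can be enforced in CI; here is a hedged sketch of a Credo configuration (the values shown are illustrative, not the project's actual settings):

# .credo.exs
%{
  configs: [
    %{
      name: "default",
      checks: [
        # Flag any function whose cyclomatic complexity exceeds 9.
        {Credo.Check.Refactor.CyclomaticComplexity, max_complexity: 9}
      ]
    }
  ]
}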

Conclusion

These results might paint a black-and-white picture, but keep in mind that both Elixir and Java have their advantages and shortcomings. If we are talking about CPU or memory-intensive operations in low concurrency mode, Java is the clear winner thanks to its programming model and the huge amount of optimisation done in the JVM. On the other hand, Elixir is a better choice for highly concurrent, available and scalable applications, not to mention the field of fault-tolerant systems. Elixir also has the advantage of being a more modern language with less syntactic clutter and less need to write boilerplate code.

The post Comparing Elixir vs Java  appeared first on Erlang Solutions.

by Attila Sragli at May 14, 2024 09:39

May 09, 2024

Erlang Solutions

A Comprehensive Guide to Elixir vs Ruby

Deciding what programming language is best for your long-term business strategy is a difficult decision. If you’re tossing a coin between Elixir and Ruby, or considering making a shift from one to the other, you probably have a lot of questions about both languages, which we will compare for you in this Elixir vs Ruby guide.

Let’s explore the advantages and disadvantages of each language, as well as their optimal use cases and other key points, providing you with a clearer insight into both. Elixir is well-known for its scalability and fault tolerance, making it the best choice for real-time applications and systems needing high concurrency. Meanwhile, Ruby’s strength lies in its syntax and robust framework. Each language has its ideal use cases, and we’ll take a closer look at them throughout the article.

The pros of Elixir

If you’re considering migrating from Ruby to Elixir, you’ll undoubtedly be looking into its benefits and some key advantages it has over other languages. So let’s jump into some of its most prominent features.

Built on the BEAM

As mentioned, Elixir operates on the Erlang virtual machine (BEAM). The BEAM is one of the oldest virtual machines in the IT industry and remains widely used. It is ideal for building and managing systems with many concurrent connections.

Immutable data

A major advantage of Elixir is its support for immutable data, which simplifies code understanding. Elixir ensures that data is unchanged once it has been defined, enhancing code reliability by preventing unexpected changes to variables, and making for easier debugging.
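
A quick, generic illustration of what this means in practice:

user = %{name: "Ada", role: :admin}

# "Updating" a map returns a new map; the original binding is untouched.
demoted = Map.put(user, :role, :viewer)

user.role     # => :admin
demoted.role  # => :viewer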

Top performance

Elixir offers amazing performance. The Phoenix framework, the most popular web development framework in Elixir, boasts remarkable speed and response times (a matter of milliseconds). While Rails isn’t a slow system either, Elixir’s performance just edges it out, making it a superior choice. We’ve previously made the case for Elixir being one of the fastest programming languages in an earlier post.

Parallelism

Parallel systems often have latency and responsiveness challenges due to how much computing power a single task can require. Elixir addresses this with its very clever process scheduler, which proactively reallocates control between processes.

So even under heavy loads, a slow process isn’t able to significantly impact the overall performance of an Elixir application. This capability ensures low latency, a key requirement for modern web applications.

Highly fault-tolerant

In most programming languages, when a bug is identified in one process, it crashes the whole application. But Elixir handles this differently. It has unmatched fault tolerance. 

A fan favourite of the language, Elixir inherits Erlang’s “let it crash” philosophy, allowing processes to restart after a critical failure. This eliminates the need for complex recovery strategies.

Distributed concurrency

Elixir supports distributed concurrency, allowing you to run concurrent code on a single computer or across multiple machines. You can learn about the importance of building scalable and concurrent applications in the Elixir programming language in this post.

Scalability

Elixir gets the most out of a single machine, which is perfect for systems or applications that need to scale or sustain heavy traffic. Thanks to its architecture, there’s no need to continuously add servers to accommodate demand.

So what is Elixir?

Elixir, as stated on the official website Elixir-lang.org, describes itself as a ‘dynamic and functional language for creating scalable and maintainable applications.’ It is a great choice for any situation where scalability, performance and productivity are priorities, particularly within IoT endeavours and web applications.

Elixir runs on the BEAM, Erlang’s virtual machine (VM), which is well known for managing fault-tolerant, low-latency distributed systems. Created in 1986 by Ericsson, Erlang was designed to address the growing demands within the telecoms industry.

It was later released as free and open-source software in 1998 and has since grown in popularity thanks to the demand for concurrent services.  If you would like a more detailed breakdown explaining the origins and current state of the Elixir programming language, check out our “What is Elixir” post in full.

The pros of Ruby

Now let’s explore the benefits that Ruby has to offer. There are massive advantages for developers, from its expansive library ecosystem to its user-friendly syntax and supportive community.

Huge ecosystem

Not only is Ruby a very user-friendly programming language, but it also boasts a vast library ecosystem. Whatever feature you want to implement, there’s likely something available to help you develop swift applications.

Easy to work with

Ruby’s creator aimed to make development pleasant and a breeze for users. For this reason, Ruby is straightforward, clean and has an easily understandable syntax. This makes for very easy and productive development, which is why it remains such a popular choice with developers.

Helpful, vibrant community

The Ruby community is a vibrant one that thrives on consistently publishing readily available solutions that are open to the public, like the ever-popular Ruby community page. This environment is very advantageous for new developers, who can easily seek assistance and find valuable solutions online.

Commitment to standards

Ruby offers strong support for web standards across all aspects of an application, from its user interface to data transfer.

When building an application with Ruby, developers adhere to already established software design principles such as  “coding by convention,” “don’t repeat yourself,” and the “active record pattern.”

So why are all of these points considered so advantageous to Ruby?

Firstly, it simplifies the learning curve for beginners and is designed to enhance the professional experience. It also lends itself to better code readability, which is great for collaboration among developers. Finally, it reduces the amount of code needed to implement features.

What is Ruby?

Ruby stands out as a highly flexible programming language. Developers who code in Ruby are able to make changes to its functionality. Unlike compiled languages like C or C++, Ruby is an interpreted language, similar to Python.

But unlike Python, which focuses on a single, definitive solution for every problem, Ruby projects tend to take multiple problem-solving approaches. Depending on your project, this approach has pros and cons.

One hallmark of Ruby is its user-friendly nature. It hides a lot of intricate details from the programmer, making it much easier to use compared to other popular languages. But it also means that finding bugs in code can be harder. 

There is a major convenience factor to coding in Ruby. Any code that you write will run on any major operating system such as macOS, Windows and Linux, without having to be ported.

Elixir v Ruby: Key differences

There are some significant differences between the two powerhouse languages. While Elixir and Ruby are both versatile, dynamic languages, unlike Ruby, Elixir code undergoes ahead-of-time compilation to Erlang VM (virtual machine) bytecode, which enhances its single-core performance substantially. Elixir’s focus is on code readability and expressiveness, while its robust macro system facilitates easy extensibility.

Elixir and Ruby’s syntax also differ in several ways. For instance, Elixir uses pipes (marked by |> operator) to pass the outcome of one expression as the initial argument to another function, while Ruby employs “.” for method chaining.

Also, Elixir provides explicit backing for immutable data structures, a feature not directly present in Ruby. It also offers first-rate support for typespecs, a capability lacking in Ruby.
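
For example (a generic sketch, not taken from either codebase), a small pipeline with a typespec might look like this:

defmodule Text do
  @spec word_count(String.t()) :: non_neg_integer()
  def word_count(sentence) do
    sentence
    |> String.downcase()
    |> String.split(~r/\s+/, trim: true)
    |> length()
  end
end

Text.word_count("Elixir pipes pass results forward")  # => 5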

Elixir vs Ruby: A history

To gain a better understanding of the frequent Ruby and Elixir comparisons, let’s take it back to the 90s, when Ruby was created by Yukihiro Matsumoto. He combined the best features of Perl, Smalltalk, Eiffel, Ada and Lisp to simplify the tasks of developers. But Ruby’s popularity surged with the release of the open-source framework Ruby on Rails.

This launch proved to be revolutionary in the world of web development, making code tasks achievable in a matter of days instead of months. As one of the leading figures on the Rails Core Team, Jose Valim recognised the potential for evolution within the Ruby language.

In 2012, Elixir was born: a functional programming language built on the Erlang virtual machine (VM). The aim of Elixir was to create a language with the friendly syntax of Ruby while boasting fault tolerance, concurrency capabilities and a commitment to developer satisfaction.

The Elixir community also has Phoenix, an open-source framework created by Chris McCord. Working with Jose Valim and drawing on the core values of Ruby on Rails, McCord built a much more effective framework for the Elixir ecosystem.

Best use for developers

Elixir is a great option for developers who want the productivity of Ruby combined with the scalability of the BEAM. It performs just as well as Ruby for Minimum Viable Products (MVPs) and startups, while demonstrating robust scalability for extensive applications.

For companies who want swift delivery without sacrificing quality, Elixir works out as a great overall choice.

Comparing Ruby v Elixir

Exploring the talent pool for Elixir and Ruby

Elixir is a newer language than Ruby and has a smaller pool of developers. But that doesn’t stop it from steadily becoming one of the most popular programming languages – it ranked as the second most loved language in the 2022 Stack Overflow Developer Survey.

Elixir v Ruby – The Stack Overflow Survey 2022

Let’s also not forget it’s a functional language, and functional programming typically demands a different way of thinking compared to object-oriented programming.

As a result, Elixir developers tend to have more experience and understanding of programming concepts. And the Elixir community is also rapidly growing. Those who are familiar with Ruby commonly make the switch to Elixir.

Although Elixir developers might be more difficult to find, once you do find them, they are worth their weight in gold.

Who is using Elixir and Ruby?

Let’s take a look at some highly successful companies that have used Ruby and Elixir:

Elixir

Discord: Uses Elixir for its real-time messaging infrastructure, benefiting from Elixir’s concurrency and fault tolerance.

Pinterest: Takes advantage of Elixir’s scalability and fault-tolerance features.

Bleacher Report: Bleacher Report, a sports news website, utilises Elixir for its backend services, including real-time updates and notifications.

Moz: Uses Elixir for its backend services, benefiting from its concurrency model and fault tolerance.

Toyota Connected: Leverages Elixir for building scalable and fault-tolerant backend systems for connected car applications.

Ruby

Airbnb: Uses Ruby on Rails for its web platform, including features like search, booking, and reviews.

GitHub: Is built primarily using Ruby on Rails.

Shopify: Relies on Ruby on Rails for its backend infrastructure.

Basecamp: Built using Ruby on Rails.

Kickstarter: Uses Ruby on Rails for its website and backend services.

So, what to choose?

Migrating between programming languages, or simply deciding between them, presents an opportunity to enhance performance, scalability and robustness. But it is a journey, one that requires careful planning and execution to achieve the best long-term results for your business.

While the Ruby community offers longevity, navigating outdated solutions can be a challenge. Nonetheless, the overlap between the Ruby and Elixir communities fosters a supportive environment for transitioning from one to the other. Elixir has a learning curve that may deter some, but for developers seeking typespec support and parallel computing benefits, it is invaluable.

If you’re already working with existing Ruby infrastructure, incorporating Elixir to address scaling and reliability issues is a viable option. The synergies between the two languages promote a seamless transition. 

Ultimately, while Ruby remains a solid choice, the advantages of Elixir make it a compelling option worth considering for future development and business growth. You can learn more about our Elixir offering on our Elixir consulting page, or by contacting our team directly.

The post A Comprehensive Guide to Elixir vs Ruby appeared first on Erlang Solutions.

by Erlang Solutions Team at May 09, 2024 10:21

May 05, 2024

The XMPP Standards Foundation

The XMPP Newsletter April 2024

XMPP Newsletter Banner

Welcome to the XMPP Newsletter, great to have you here again! This issue covers the month of April 2024.

XSF Announcements

If you are interested in joining the XMPP Standards Foundation as a member, please apply by 19th May 2024!

XMPP and Google Summer of Code 2024

The XSF has been accepted as a hosting organisation at GSoC in 2024 again! These XMPP projects have received a slot and are in the community bonding phase now:

XSF and Google Summer of Code 2024

XSF Fiscal Hosting Projects

The XSF offers fiscal hosting for XMPP projects. Please apply via Open Collective. For more information, see the announcement blog post. Current projects you can support:

XMPP Events

  • XMPP Sprint in Berlin: On Friday, 12th to Sunday, 14th of July 2024.
  • XMPP Italian happy hour [IT]: monthly Italian XMPP web meeting, every third Monday of the month at 7:00 PM local time (online event, with web meeting mode and live streaming).

Articles

Software News

Clients and Applications

Servers

Libraries & Tools

Extensions and specifications

The XMPP Standards Foundation develops extensions to XMPP in its XEP series in addition to XMPP RFCs.

Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).

Proposed

The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

  • No XEP was proposed this month.

New

  • No new XEPs this month.

Deferred

If an Experimental XEP is not updated for more than twelve months, it will be moved from Experimental to Deferred. A further update will move the XEP back to Experimental.

  • No XEPs deferred this month.

Updated

  • Version 0.7.0 of XEP-0333 (Displayed Markers)
    • Change title to “Displayed Markers”
    • Bring back Service Discovery feature (dg)
  • Version 0.4.1 of XEP-0440 (SASL Channel-Binding Type Capability)
    • Recommend the usage of tls-exporter over tls-server-end-point (fs)
  • Version 0.2.1 of XEP-0444 (Message Reactions)
    • fix grammar and spelling (wb)
  • Version 1.0.1 of XEP-0388 (Extensible SASL Profile)
    • Fixed typos (md)

Last Call

Last calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call can help improve the XEP before returning it to the Council for advancement to Stable.

  • XEP-0398: User Avatar to vCard-Based Avatars Conversion

Stable

  • Version 1.0.0 of XEP-0386 (Bind 2)
    • Accept as Stable as per Council Vote from 2024-04-02. (XEP Editor (dg))
  • Version 1.0.0 of XEP-0388 (Extensible SASL Profile)
    • Accept as Stable as per Council Vote from 2024-04-02. (XEP Editor (dg))
  • Version 1.0.0 of XEP-0333 (Displayed Markers)
    • Accept as Stable as per Council Vote from 2024-04-17. (XEP Editor (dg))
  • Version 1.0.0 of XEP-0334 (Message Processing Hints)
    • Accept as Stable as per Council Vote from 2024-04-17 (XEP Editor (dg))

Deprecated

  • No XEP deprecated this month.

Rejected

  • XEP-0360: Nonzas (are not Stanzas)

Spread the news

Please share the news on other networks:

Subscribe to the monthly XMPP newsletter
Subscribe

Also check out our RSS Feed!

Looking for job offers or want to hire a professional consultant for your XMPP project? Visit our XMPP job board.

Newsletter Contributors & Translations

This is a community effort, and we would like to thank translators for their contributions. Volunteers and more languages are welcome! Translations of the XMPP Newsletter will be released here (with some delay):

  • English (original): xmpp.org
    • General contributors: Adrien Bourmault (neox), Alexander “PapaTutuWawa”, Arne, cal0pteryx, emus, Federico, Gonzalo Raúl Nemmi, Jonas Stein, Kris “poVoq”, Licaon_Kter, Ludovic Bocquet, Mario Sabatino, melvo, MSavoritias (fae,ve), nicola, Simone Canaletti, XSF iTeam
  • French: jabberfr.org and linuxfr.org
    • Translators: Adrien Bourmault (neox), alkino, anubis, Arkem, Benoît Sibaud, mathieui, nyco, Pierre Jarillon, Ppjet6, Ysabeau
  • Italian: notes.nicfab.eu
    • Translators: nicola

Help us to build the newsletter

This XMPP Newsletter is produced collaboratively by the XMPP community. Each month’s newsletter issue is drafted in this simple pad. At the end of each month, the pad’s content is merged into the XSF Github repository. We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. You have a project and want to spread the news? Please consider sharing your news or events here, and promote it to a large audience.

Tasks we do on a regular basis:

  • gathering news in the XMPP universe
  • short summaries of news and events
  • summary of the monthly communication on extensions (XEPs)
  • review of the newsletter draft
  • preparation of media images
  • translations
  • communication via media accounts

License

This newsletter is published under CC BY-SA license.

May 05, 2024 00:00

May 02, 2024

Erlang Solutions

Naming your Daemons

Within Unix systems, a daemon is a long-running background process which does not directly interact with users. Many similar processes exist within a BEAM application. At times it makes sense to name them, allowing messages to be sent without knowing their process identifier (aka PID). There are several benefits to naming processes, including:

  1. Organised processes: using a descriptive and meaningful name organises the processes in the system. It clarifies the purpose and responsibilities of the process.
  2. Fault tolerance: when a process is restarted due to a fault, it would normally have to share its new PID with all of its callers. A registered name works around this: once the restarted process is re-registered, no additional action is required and messages to the registered name resume uninterrupted.
  3. Pattern implementation: a Singleton, Coordinator, Mediator or Facade design pattern commonly has one registered process acting as the entry point for the pattern.

Naming your processes

Naturally, both Elixir and Erlang support this behaviour by registering the process. One downside of registering is that it requires an atom as the name. As a result, there is an unnecessary mapping between atoms and other data structures, typically between strings and atoms.
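
For reference, the standard atom-based registration looks roughly like this (a minimal sketch; Worker stands for any GenServer module, as in Figure 1 below):

#+begin_src Elixir
# Built-in registration requires an atom as the name.
{:ok, _pid} = GenServer.start_link(Worker, [], name: :router_thuringia_weimar)

# Callers address the process by that atom instead of its PID.
GenServer.call(:router_thuringia_weimar, :reset_counter)

# Building such names dynamically from strings (String.to_atom/1) is risky,
# because atoms are never garbage collected.
#+end_src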

To get around this, it is a common pattern to perform the registration as a two-step procedure and manage the associations manually, as shown below:


#+begin_src Elixir
# register_name/2, whereis_name/1 and unregister_name/1 are user-implemented
# helpers (e.g. backed by an ETS table) that map arbitrary terms to PIDs.
{:ok, pid} = GenServer.start_link(Worker, [], [])
register_name(pid, "router-thuringia-weimar")

pid = whereis_name("router-thuringia-weimar")
GenServer.call(pid, msg)

unregister_name("router-thuringia-weimar")
#+end_src


Figure 1

Notice the example uses a composite name, built up from an equipment type (router), a state (Thuringia) and a city (Weimar). Indeed, this pattern is typically used for composite names, and in particular dynamic composite names, as it avoids the problem that atoms are never garbage collected in the BEAM.

As a frequently observed pattern, both Elixir and Erlang offer a convenient method to accomplish this while ensuring a consistent process usage pattern. In typical Elixir and Erlang style, this is subtly suggested in the documentation through a concise, single-paragraph explanation.

In this write-up, we will demonstrate how to use the built-in generic server options to achieve similar behaviour.

Alternative process registry

According to the documentation, we can register a GenServer into an alternative process registry using the via directive.

The registry must provide the following callbacks: register_name/2, unregister_name/1, whereis_name/1 and send/2.

As it happens there are two commonly available applications which satisfy these requirements: gproc and Registry. gproc is an external Erlang library written by Ulf Wiger, while Registry is a built-in Elixir library.

gproc is an application in its own right, which makes it simple to use: it only needs to be started as part of your system, whereas Registry requires adding a Registry process to your supervision tree.

We will be using gproc in the examples below to address the needs of both Erlang and Elixir applications. 

To use gproc we have to add it to the project dependencies.

Into Elixir’s mix.exs:

#+begin_src Elixir
  defp deps do
    [
      {:gproc, git: "https://github.com/uwiger/gproc", tag: "0.9.1"}
    ]
  end
#+end_src

Figure 2

Next, we change the arguments to start_link, call and cast to use the gproc alternative registry, as listed below:

#+begin_src Elixir :noweb yes :tangle worker.ex
defmodule Edproc.Worker do
  use GenServer

  def start_link(name) do
    GenServer.start_link(__MODULE__, [], name: {:via, :gproc, {:n, :l, name}})
  end

  def call(name, msg) do
    GenServer.call({:via, :gproc, {:n, :l, name}}, msg)
  end

  def cast(name, msg) do
    GenServer.cast({:via, :gproc, {:n, :l, name}}, msg)
  end

  <<worker-gen-server-callbacks>>
end
#+end_src

Figure 3

As you can see the only change is using {:via, :gproc, {:n, :l, name}} as part of the GenServer name. No additional changes are necessary. Naturally, the heavy lifting is performed inside gproc.

The tuple {:n, :l, name} is specific to gproc and refers to registering a local (:l) name (:n). See the gproc documentation for additional options.

Finally, let us take a look at some examples.

Example

In an Elixir shell:

#+begin_src Elixir
iex(1)> Edproc.Worker.start_link("router-thuringia-weimar")
{:ok, #PID<0.155.0>}
iex(2)> Edproc.Worker.call("router-thuringia-weimar", "hello world")
handle_call #PID<0.155.0> hello world
:ok
iex(4)> Edproc.Worker.start_link({:router, "thuringia", "weimar"})
{:ok, #PID<0.156.0>}
iex(5)> Edproc.Worker.call({:router, "thuringia", "weimar"}, "reset-counter")
handle_call #PID<0.156.0> reset-counter
:ok
#+end_src

Figure 4

As shown above, it is also possible to use a tuple as a name. Indeed, it is a common pattern to categorise processes with a tuple reference instead of constructing a delimited string.

Summary

The GenServer behaviour offers a convenient way to register a process with an alternative registry such as gproc. Such a registry permits the use of any BEAM term instead of the usual non-garbage-collected atom name, enhancing the ability to manage process identifiers dynamically. For Elixir applications, using the built-in Registry module might be a more straightforward and native choice, providing a simple yet powerful means of process registration directly integrated into the Elixir ecosystem.
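
For comparison, a minimal sketch of the same idea using the built-in Registry might look like this (the registry name Edproc.Registry is an assumption for illustration):

#+begin_src Elixir
# Start a unique-keys registry, typically as a child of your supervision tree:
{:ok, _} = Registry.start_link(keys: :unique, name: Edproc.Registry)

# Register the worker under an arbitrary term at start time:
name = {:via, Registry, {Edproc.Registry, {:router, "thuringia", "weimar"}}}
{:ok, _pid} = GenServer.start_link(Edproc.Worker, [], name: name)

# Address it the same way for calls and casts:
GenServer.call(name, "reset-counter")
#+end_src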

Appendix

#+NAME: worker-gen-server-callbacks
#+BEGIN_SRC Elixir
  @impl true
  def init(_) do
    {:ok, []}
  end

  @impl true
  def handle_call(msg, _from, state) do
    IO.puts("handle_call #{inspect(self())} #{msg}")
    {:reply, :ok, state}
  end

  @impl true
  def handle_cast(msg, state) do
    IO.puts("handle_cast #{inspect(self())} #{msg}")
    {:noreply, state}
  end
#+END_SRC

Figure 5

The post Naming your Daemons appeared first on Erlang Solutions.

by Tee Teoh at May 02, 2024 13:46

April 25, 2024

Erlang Solutions

Technical debt and HR – what do they have in common?

At first glance, it may sound absurd. Here we have technical debt, a purely engineering problem, as technical as it can get, and another area, HR, dealing with psychology and emotions, put into one sentence. Is it possible that they are closely related? Let’s take it apart and see.

Exploring technical debt

What is technical debt, anyway? A tongue-in-cheek definition is that it is code written by someone else. But there is more to it – it is code written years ago, possibly by someone who has left the company. Also, every major piece of software is written incrementally over many years. Even if it started with a very detailed, well-thought-through design, there inevitably came plenty of modifications and additions which the original design could not easily accommodate.

Your predecessors sometimes had to cut corners and bend over backwards to achieve desired results in an acceptable amount of time. Then they moved on, someone else took over and so on.

What you now have is a tangled mess, mixing coding styles, techniques and design patterns, with an extra addition of ad-hoc solutions and hacks. You see a docstring like “temporary solution, TODO: clean up”, you run git blame and it is seven years old. It happens to everybody.

The business behind technical debt

Technical debt is a business issue. You can read about it in more detail in our previous post.

Source: Medium

The daily tasks of most developers are fixing bugs and implementing new features in existing code. The more messy and convoluted the code is, the more time it takes every time one has to read it and reason about it. And it is real money: according to a McKinsey report, this burden amounts to 20%-40% of the value of an average business’s technology stack. Engineers are estimated to spend up to 50% of their time struggling with it.

So what can businesses do to get their code in check? Here are some suggestions:

  • Taking a step back 
  • Reassessing the architecture and techniques 
  • Making more informed choices 
  • Rewriting parts of the code to make it consistent and understandable, removing unused code and duplications

Unfortunately, this is very rarely done, since it does not bring any visible improvements to the product – clients are not interested in code quality, they want software that does its job. Improving the code costs real money, while the increase in developer productivity is impossible to quantify.

Technical debt also has another property – it is annoying. And this brings us nicely to the second topic.

Happy HR, Happier devs

What is HR about? In part, it is about the well-being of employees. Every employer wants good people to stay in the company. The most valuable employee is someone who likes their job and feels good about the place. HR departments go to great lengths to achieve this.

But, you can buy new chairs and phones, decorate the office, buy pizza, organise board game evenings – all this is mostly wasted if the following morning your devs show up at their workplace only to say “Oh no, not this old cruft again”, embellishing that statement with a substantial amount of profanities.

Now I tell you this: Nothing makes developers happier than allowing them to address their pain points. Ask them what they hate the most about the codebase and let them improve it, the way they choose to, at their own pace. They will be delighted.

You may ask how I know. Firstly, I’m a dev myself. Secondly, I’m fortunate enough to be currently working for a company that took the steps and did exactly that:

Step 1: Set up a small “tech debt” team

Step 2: Collected improvement proposals from all developers

Step 3: Documented them

Step 4: Defined priorities

Currently, the technical debt team or the proposers themselves are gradually putting these proposals into action, one by one. The code is getting better. We are becoming more productive. And if we’re happy, isn’t HR?

Calling upon the compassionate and proactive HR professionals out there: talk to your CTOs, tell them you all are after the same thing – you want these frustrated, burned-out coders happy, enthusiastic and more productive, and that you have an idea of how to achieve this.

Chances are they will be interested.

The post Technical debt and HR – what do they have in common? appeared first on Erlang Solutions.

by Bartek Gorny at April 25, 2024 09:17

ProcessOne

ejabberd Docs now using MkDocs

The ejabberd Docs website did just get a major rework: new content management system, reorganized navigation, improved markdown, and several improvements!

Brief documentation timeline

ejabberd started in November 2002 (see a timeline in the ejabberd turns 20 blog post). And the first documentation was published in January 2003, using LaTeX, see Ejabberd Installation and Operation Guide. That was one single file, hosted in the ejabberd CVS source code repository, and was available as a single HTML file and a PDF.

As the project grew and got more content, in 2015 the documentation was converted from LaTeX to Markdown, moved from ejabberd repository to a dedicated docs.ejabberd.im git repository, and published using a Go HTTP server in docs.ejabberd.im, see an archived ejabberd Docs site.

New ejabberd Docs site

Now the ejabberd documentation has moved to MkDocs+Material, and this brings several changes and improvements:

Site and Web Server:

  • Replaced Go site with MkDocs
  • Material theme for great features and visual appeal, including light/dark color schemes
  • Still written in Markdown, but now using several MkDocs, Material and Python-Markdown extensions
  • The online site is built by GitHub Actions and hosted on GitHub Pages, with shorter automatic deployment times
  • Offline reading: the ejabberd Docs site can be downloaded as a PDF or zipped HTML, see the links in home page

Navigation

  • Major navigation reorganization, keeping URLs intact so old links still work (only Install got some relevant URL changes)
  • Install section is split into several sections: Containers, Binaries, Compile, …
  • Reorganized the Archive section, and now it includes the corresponding Upgrade notes
  • Several markdown files from the ejabberd and docker-ejabberd repositories are now incorporated here

Content

  • Many markdown visual improvements, specially in code snippets
  • Options and commands that were modified in the last release will show a mark, see for example API Reference
  • Version annotations are shown after the corresponding title, see for example sql_flags
  • Modules can have version annotations, see for example mod_matrix_gw
  • Links to modules, options and API now use the real name with _ character instead of - (compare old #auth-opts with #auth_opts). The old links are still supported, no broken links.
  • Listen Modules section is now better organized
  • New experimental ejabberd Developer Livebook

So, please check the revamped ejabberd Docs site, and head to docs.ejabberd.im git repository to report problems and propose improvements.

The post ejabberd Docs now using MkDocs first appeared on ProcessOne.

by Badlop at April 25, 2024 08:54

April 21, 2024

Remko Tronçon

Generating coverage reports & badges for SwiftPM apps

The age plugin for Apple’s Secure Enclave is a small, platform-independent Swift CLI app, built using the Swift Package Manager. Because it does not use Xcode for building, you can’t use the Xcode IDE for collecting and browsing test coverage of your source files. I therefore wrote a small (self-contained, dependencyless, cross-platform) Swift script to transform SwiftPM’s raw coverage data into annotated source code, together with an SVG badge to put on your project page.

by Remko Tronçon at April 21, 2024 00:00

April 18, 2024

Erlang Solutions

Blockchain Tech Deep Dive| Meaning of Ownership

Welcome to part three of our ‘Making Sense of Blockchain’ blog post series. Here we’ll explore how our attitudes to ownership are changing and how this relates to the value we attach to digital assets in the blockchain space. You can check out ‘Innovating with Erlang and Elixir’ here if you missed part two of the series.

Digital Assets: Ownership in the era of Blockchain

While physical goods contain an abstract element – the design, the capacity to model, package and make them appealing to owners or consumers – digital assets have a far stronger element of abstraction, which defines their value. Their physical element, in contrast, is often negligible and replaceable (e.g. software can be stored on disk, transferred or printed). These types of assets typically stimulate our intellect and imagination.

Digital goods have a unique quality in that they can be duplicated effortlessly and inexpensively. They can exist in multiple forms across various platforms, thanks to the simple way we store them using binary code. They can be recreated endlessly from identical copies. This is a feature that dramatically influences how we value digital assets. Because replicas are so easy to make, it is not the copies or representations that hold value but the original digital creation itself. This principle is a cornerstone of blockchain technology, with its hash lock feature safeguarding the integrity of digital assets.

If digital items are used correctly, the capacity to clone a digital item can increase confidence that it will exist indefinitely, which keeps its value steady. However, the immutable and perpetual existence of digital goods isn’t guaranteed forever.

They are dependent on a physical medium (e.g. hard disk storage), that could be potentially altered, degraded or become obsolete over time. 

A blockchain, like the one used in the Bitcoin network, is a way to replicate and reinforce digital information via Distributed Ledger Technology (DLT). 

An example of the DLT network

Distributed Ledger Technology lets users record, share, and synchronise data and transactions across a distributed network comprising many participants.

It includes mechanisms to repair issues, should data become corrupted due to hard disk failure or a malicious attack.

However, as genetic evolution suggests, clones with the same characteristics can all die out by the introduction of an actor that makes the environment unfit for survival. So it might be sensible to introduce different types of ledgers to keep data safe on various physical platforms, increasing the likelihood of survival of information.

The evolution of services and their automation

Now let’s consider how we have started to attach value to services and how we are becoming increasingly demanding about their performance and quality.

Services are a type of abstract value often traded on the market. They involve actions defined by contracts that lead to some kind of change. This change can apply to physical goods, digital assets, and other services themselves or people. What we trade is the potential to exercise a transformation, which in some instances might have been applied already. For example, a refined product like oil that has already been changed from its original raw state.

As transformations become more automated and the human element decreases, services are gradually taking the shape of automated algorithms, which are yet another form of digital assets. Take smart contracts, for example: a rapidly growing industry projected to grow from USD 1.9 billion in 2023 to USD 9.2 billion by 2032, according to Market Research Future.

Smart Contracts Market Projection Overview

But it’s important to state that an algorithm alone isn’t enough to apply digital transformation, we also require an executor, like a physical or virtual machine.

Sustainability and access to resources

Stimulation of the intellect and/or imagination isn’t the only motivator that explains the increasing interest in digital goods and ultimately their rising market value. Physical goods are known to be quite expensive to handle. To create, trade, own and preserve them, there is a significant expenditure required for storage, transport, insurance, maintenance, extraction of raw materials etc.

There’s a competitive and environmental cost involved, making the acquisition of physical resources inherently difficult to scale and sometimes costly, especially in densely populated urban areas. As a result, people are motivated to possess and exchange digital goods and services.

The high power consumption required by the Bitcoin network’s method of consensus would potentially negate these environmental benefits. Although power consumption is a concern, it should be remembered that blockchain technology can act as a force for good, being used for environmentally beneficial projects. 

A great example is the work being done by dClimate, a decentralised climate information ecosystem making it easier for businesses to find and utilise essential environmental information that could impact their sector. These important decisions in turn provide information on: 

  • Where businesses can build infrastructure
  • Where they can manage water resources
  • How businesses can protect vulnerable communities

However, some of these activities (such as those requiring non-physical effort, like stock market trading, and legal or accounting services) are best suited for significant cost reduction through algorithmic automation (assuming that the high carbon footprint required to drive the ‘Proof of Work’ consensus mechanism used in many DLT ecosystems can be avoided).

Barriers to acceptance of digital assets

While it is sensible to forecast a significant expansion of the digital assets market in the coming years, it is also true that, at present, there are still many psychological barriers to overcome to get broader traction in the market.

The main challenge relates to trust. A buyer wants some assurance that traded assets are genuine and that the seller owns them or acts on behalf of the owner. DLT provides a solid way to work out the history of a registered item without interrogating a centralised trusted entity. Provenance and ownership are inferable and verifiable from several replicated ledgers, while block sequences can help ensure there is no double spending or double sale taking place within a certain time frame.

Another challenge is linked to the meaning of ownership outside of the context of a specific market. Take the closure of Microsoft’s ebook store. Microsoft’s decision to pull out of the ebook market, presumably motivated by a lack of profit, could have an impact on all ebook purchases that were made on that platform. The perception of the customer was obviously that owning an ebook was the same as owning a physical book. 

What Microsoft might have contractually agreed through its End-User License Agreement (EULA), however, is that this is true only within the contextual existence of its platform.

There is a push, in this sense, towards forms of ownership that can break out of the restrictions of a specific market and be maintained in a broader context. Blockchain’s DLT, in conjunction with smart contracts that can exist potentially indefinitely, can serve this purpose, allowing people to effectively retain the use of their digital items across multiple applications.

The transition to these new notions of ownership is particularly demanding when it comes to digital non-fungible assets. Meanwhile, embracing fungible assets, such as cryptocurrency, has been somewhat easier for customers who are already used to relating to financial instruments. 

This is probably because fungible assets serve the unique function of paying for something, while in the case of non-fungible assets, there is a range of functions that define their meaning in the digital or physical space.

What this will mean for blockchain adopters

Blockchain technology has dramatically influenced a major emerging innovation over the last few years: the ownership of digital assets. It is clear that we are entering a new era that is likely to revolutionise the perception of ownership and the reliance on trusted and trustless forms of automation. This is driven by the need to increase interoperability, cost compression, sustainability, performance and customisation. For any business size in any industry, we’re ready to investigate, build and deploy your blockchain-based project on time and to budget. Let us know about your blockchain project here.

The post Blockchain Tech Deep Dive| Meaning of Ownership appeared first on Erlang Solutions.

by Erlang Solutions Team at April 18, 2024 08:59

April 15, 2024

Monal IM

ROS Security Audit

Radically Open Security (ROS) kindly performed a security audit of some parts of Monal.
Specifically they audited the usage of our XML query language and the implementations of SASL2, SCRAM and SSDP.

The results in a nutshell: no security issues found, read the full report here: Monal IM penetration test report 2024 1.0 .

April 15, 2024 00:00

April 13, 2024

Snikket

Snikket Android app temporarily unavailable in Google Play store [RESOLVED]

We initially shared this news on our social media page, thinking this was a temporary issue. But we’ve had no response from Google for several days, and want to explain the situation in more detail.

Update 16th April: Over a week after this began, Google have reinstated the Snikket app on the Play Store and everything works again. Thanks to everyone who gave us encouragement and support during this time! Feel free to read on for details of what happened.

Summary

We merged some changes from our upstream project, Conversations, and we submitted the new version to Google for review. Before responding, they removed the existing published version from the store. We have submitted a new version (on 10th April) that we believe should satisfy Google, but they have not yet published it or provided any feedback.

This means that it’s not currently possible for Android users to install the app using Google Play. We recommend that you install it via F-Droid instead.

Workaround for Android users

If you receive an invitation to Snikket, the Play Store link in the invitation will not work. The best course of action is to install the app using an open-source marketplace instead: F-Droid.

  1. Follow the instructions on f-droid.org to download and install F-Droid.
  2. Install Snikket using F-Droid.
  3. After the Snikket app is installed, open your Snikket invitation link again.
  4. Tap the ‘Open the app’ button.
  5. Follow the Snikket app’s instructions to set up your new Snikket account.

The full story

I’m Matthew, founder of Snikket and lead developer. This is the story of how we arrived at this situation with Google.

It all began when…

A few months ago, Snikket, along with a number of other XMPP apps, found our updates rejected by Google’s review team, claiming that because we upload the address book entries of users to our servers, we need a “prominent disclosure” of this within the app. The problem is… we don’t upload the user’s address book anywhere!

The app requests permission to read the address book. Granting this permission is optional, and the reason is explained before the permission is requested. If you grant the permission, the app has a local-only (no upload!) feature that allows you to “link” your XMPP contacts with your phone address book contacts, allowing you to unify things like contact photos. Contact information from your address book is never uploaded.

Many messaging apps, such as WhatsApp, Signal, and others, request access to your address book so they can upload them to their servers and determine who else you know that is using their service. Google have decided that’s what we’re doing, and they won’t accept any evidence that we’re not.

We don’t have telemetry in our app, but we assumed that this feature is probably not used by most people, so we decided to remove it from the Play Store version of the app rather than continue fighting with Google.

Amusingly, Google also rejected the update that removed the ‘READ_CONTACTS’ permission. Multiple times. It took an appeal before they revealed that they were rejecting the new version because one of the beta tracks still had an older version with the READ_CONTACTS permission. Weird.

I fixed that, and submitted again. They rejected it again. This time they said that they required a test login for the app. Funny, because we already provided one long ago. I assumed the old test account was no longer working, so I made them a new one and resubmitted the app. They rejected it again with the same reason - saying we had not provided valid test account credentials.

“You didn’t provide an active demo/guest account or a valid username and password which we need to access your app.” – Google reviewers

The weird thing was, when I logged in to that account to test it, I saw that they had logged in and even sent some messages. So they were lying?!

We submitted an appeal with all the evidence that the account was working, and their reviewers had even logged in and used it successfully. After some time, they eventually responded that they wanted a second test account. Why couldn’t they just say that in the first place?!

After adding credentials for a second account, and using the Snikket circles features to ensure they could find each other easily, we resubmitted.

Rejected again.

This time the rejection reason was really the best one so far: they claimed the app was unable to send or receive messages. Rather funny for a messaging app that thousands of people use to send and receive messages daily.

Wait, a messaging app that can’t send messages?

Screenshot of Google’s response: Issue found: Message functionality. The message sending and/or receiving functionality on your app doesn’t work as expected. For example: Your app is not able to send outgoing messages. Your app is not able to receive incoming messages.

Once again, I logged into the test account we had provided to Google, and once again saw that they had successfully exchanged messages between their two test accounts. We submitted another appeal, with evidence.

Eventually they responded, clarifying that their complaint was specifically about the app when used with Android Auto, their smart car integration. I do not have such a car, and couldn’t find any contributor who had, but I found that Google provide an emulator that can run on a PC, so I set that up on my laptop and proceeded to test.

You won’t be surprised to learn at this point that the messaging functionality worked fine. We responded to the appeal, including a screencast I made of the messaging functionality working with Android Auto. They informed us that they were “unable to assist with the implementation” of their policies. Then at the end of their response, suggested that if we think the app is compliant, that we should resubmit it for review.

So we resubmitted the app, which by this point had already been rejected 7 times. We resubmitted it with no modification at all. We resubmitted the version they rejected. They emailed us later that day to say it was live.

How would I rate the developer experience of publishing an app with Google Play? An unsurprising 1 star out of 5. If I could give zero, I would.

The removal

But this was all a couple of months ago. Everything was fine. Until I merged some of the nice things Daniel has been working on recently in Conversations, the app upon which Snikket Android is based. We put the new version out for beta testing and everything was going fine - the app passed review, and a few weeks later with no major issues reported, we pushed the button to promote the new version from beta to live on the store.

On the 8th April we received an email from Google with the subject line:

“Action Required: Your app is not compliant with Google Play Policies (Snikket)”

I was ill this day, and barely working. For reasons that, if you have read this far, you will hopefully understand, I decided to take up this fight when I was feeling better. Confusingly, a couple of days later we received another email with the same subject. At this point I realised with horror that the first email was not about the new update - they had reviewed the current published version and decided to remove it entirely from the store.

With Snikket unavailable, anyone trying to add a new Android user to their Snikket instance (whether hosted or self-hosted) is going to have a hard time. This is not good.

Their complaint was that the privacy policy was not prominent enough within the app. They had previously hit Conversations with the same thing. Daniel had already put a link to the privacy policy in the main menu of that app and this was already in the update waiting for their review. They didn’t reject the update until a couple of days later, and for a different reason.

Unknown to me, Daniel had tried to re-add the ‘READ_CONTACTS’ permission to Conversations, hoping that with the new privacy policy link and other disclaimers in place, that would be enough. They had already rejected that, and he had removed the permission again. But he did this after I had already started testing the new beta release of Snikket. The order of events went something like this:

  • Daniel experimentally re-adds READ_CONTACTS permission to Conversations
  • I merge Conversations changes into Snikket, and begin beta testing
  • Conversations update gets rejected due to the permission, and Daniel reverts the READ_CONTACTS change
  • Without knowing of the Conversations rejection, I promote the Snikket beta to the store.
  • Google rejects the Snikket update

What’s interesting is that Google rejected only on the permission change. The contacts integration itself was still disabled in Snikket. This is strong evidence that Google just assumes that if you have the permission (and presumably network permission too) then of course you must be uploading the user’s contacts somewhere.

As soon as I realised the problem, I merged the new changes from Conversations and rushed a new upload to Google Play. However at the time of writing this, several days later, Snikket remains unavailable in the store and no feedback has been received from Google.

This is an unsustainable situation

During this period we have had multiple people sign up for hosted Snikket instances, and then cancel shortly after. This is almost certainly because a vital step of the onboarding process (installing the app) is currently broken. This is providing a bad experience for our users and customers, negatively affecting the project’s reputation and income.

We are grateful that alternatives such as F-Droid exist, and allow people access to open-source apps via a transparent process and without the tyranny of Google and their faceless unaccountable review team. We need to ensure these projects are supported, and continue to improve their functionality, usability and user awareness.

Finally, we also welcome the efforts the EU has been making with things like the Digital Markets Act, to help break up the control that Google’s (demonstrably) arbitrary review process has over the success and failure of projects, and the livelihoods of app developers.

Google, are you there?

Screenshot of Google Play dashboard: Release summary: “in review”

by Snikket Team (team@snikket.org) at April 13, 2024 11:00

April 11, 2024

Erlang Solutions

Blockchain Tech Deep Dive | Innovating with Erlang and Elixir

We’re back with the latest in our Blockchain series, where we explore the technology in depth. In our first post, we explored the Six Key Principles of Blockchain.

In our latest post, we’re making the case for using Erlang, Elixir and the BEAM VM to power your blockchain project.

Blockchain and business needs

Building a robust and scalable blockchain presents many challenges that a research and development team typically needs to address. The often ambitious goals to drive decentralised consensus and governance require unconventional approaches to achieve extra performance and reliability.

Improving Transactions per Second (TPS) is the most common challenge that blockchain-related use cases expose. TPS, as the name suggests, is a metric that indicates how many transactions a network can execute per second. It is inherently difficult to produce a distributed peer-to-peer (P2P) network that can register transactions into a single data structure.

Guaranteeing consensus while delivering high TPS throughput among a vast number of nodes active on the network is even more challenging. Also, the fact that most public blockchains need to operate in a non-trusted mode requires adequate mechanisms for validation, which implies that contextual data needs to be available on demand. A blockchain should also be able to respond to anomalies such as network connectivity loss, node failure and malicious actors.

All of the above is further complicated by the continuous growth of the blockchain data structure itself, which becomes problematic in terms of storage.

It is clear that, unless businesses are prepared to invest vast amounts of resources, they would benefit from a high-level language and technology that allows them to quickly prototype and amend their code.

The ideal technology should also:

  • Offer a strong network and concurrent capabilities
  • Have technology built with distribution in mind 
  • Offer a friendly paradigm for asynchronous computation
  • Not collapse under heavy load
  • Deliver when traffic exceeds capacity

The Erlang BEAM VM (also accessible through the Elixir syntax) undoubtedly scores high on the above list of desirable features.

Erlang & Elixir’s strengths for blockchain

Fast development

The challenge: Blockchain technology is present in extremely competitive markets. According to the Grandview Marketing Analysis report, the global blockchain technology market was valued at USD 17.46 billion in 2023 and is expected to grow at a compound annual growth rate (CAGR) of 87.7% from 2023 to 2030.

Grandview Marketing Analysis report

It is critical for organisations operating in them to be able to release new features in time to attract new customers and retain existing ones.

The answer: Both Erlang and Elixir are functional languages, operating at a very high level of abstraction which is ideal for fast prototyping and development of new ideas. By using these languages on top of the Beam VM, developers dramatically increase the speed to market when compared to other lower-level or object-oriented technologies.

Solutions developed in Erlang or Elixir also lead to a significantly smaller code base, which is easier to test and adapt to changes of direction. This is helpful when you proceed to fast prototyping new solutions and when you discover that amendments and upgrades are necessary, which is very typical in blockchain R&D activity. Both languages offer support for unit testing in their standard library. This enables developers to adopt Test Driven approaches ensuring the quality is preserved when modules and libraries get refactored. The common test framework also provides support for distributed tests and can be integrated with Continuous Integration frameworks like Jenkins. Both Erlang and Elixir shells let the programmer flesh out ideas fast and access running nodes for inspection.

Introspection

The challenge: To keep a competitive advantage in a fast-changing market, it is critical for organisations to promptly identify issues and opportunities by extracting relevant information about their running systems so that actions can be taken to upgrade them where appropriate.

The answer: Erlang and Elixir allow you to connect to an already running system and check its status. This is an extremely useful debugging tool, both in development and in production. Process statuses can be checked, and deadlocks in the live system can be analysed. Various metrics and tools can show overload, bottlenecks and other key performance indicators. Enhanced introspection tools such as Erlang Solutions’ Wombat OAM also help identify scalability issues when you run performance tests.
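
As a rough illustration of this kind of introspection, the following sketch shows a few checks you might run from an IEx shell attached to a live node (:block_builder is a hypothetical registered process name):

#+begin_src Elixir
# From an IEx shell attached to the running node (e.g. started with --remsh):
Process.list() |> length()    # number of processes currently alive
:erlang.memory(:total)        # total memory used by the VM, in bytes

# Drill into a single process; :block_builder is a hypothetical registered name.
pid = Process.whereis(:block_builder)
Process.info(pid, [:message_queue_len, :current_function, :reductions])
#+end_src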

Networking

The challenge: Delivering a highly scalable and distributed P2P network is critical for blockchain enterprises. It is important to rely on stable battle-proven network libraries as reliable building blocks for exploiting use case-specific innovative approaches.

The answer: Erlang and Elixir come with strong and easy-to-manage network capabilities. There is a proven record of successful enterprises that rely on the BEAM VM’s networking strengths; including Adroll, WhatsApp, Bleacher Report, Klarna, Bet365 and Ericsson. Most of their use cases have strong analogies with the P2P networking that is required to deliver a distributed blockchain.

Combined with massive concurrency, this networking strength makes Erlang and Elixir ideal for server applications that handle many clients. The binary and bitstring syntax makes parsing binary protocols particularly easy.
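
To make the point about binary parsing concrete, here is a minimal sketch using Elixir’s bit syntax for a hypothetical fixed-layout message header:

#+begin_src Elixir
defmodule Wire do
  # Hypothetical wire format: 1-byte version, 2-byte big-endian length, payload.
  def parse(<<version::8, len::16-big, payload::binary-size(len), rest::binary>>) do
    {:ok, %{version: version, payload: payload}, rest}
  end

  def parse(_incomplete), do: {:error, :incomplete}
end

# Wire.parse(<<1, 0, 3, "abc", "next">>)
#=> {:ok, %{version: 1, payload: "abc"}, "next"}
#+end_src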

Massively concurrent

The challenge: There is a weakness afflicting Bitcoin and Ethereum where the computation of a block is competitive rather than collaborative. There is the opportunity to drive a collaborative concurrent approach: i.e. via sharding so that each actor can compute a portion of a block.

The answer: The BEAM VM powering Erlang and Elixir provides lightweight processes for applications; they are so lightweight that hundreds of thousands of them can run simultaneously. These processes do not share memory, and communication is done over asynchronous messages (unlike goroutines), so there is no need to synchronise them. The BEAM VM implementation also makes use of all available CPUs and cores. This makes Erlang and Elixir ideal for workloads that involve a huge amount of concurrency and consist of mostly independent workflows, and is especially useful for distributing portions of work when computing a Merkle tree of transactions.
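
As a rough sketch of this collaborative, concurrent style (not a full Merkle tree implementation), transaction chunks can be hashed in parallel with one lightweight process per chunk:

#+begin_src Elixir
# A minimal sketch: hash chunks of transactions concurrently; a real Merkle
# tree would then combine the hashes pairwise up to a single root.
transactions = for i <- 1..100_000, do: "tx-#{i}"

leaf_hashes =
  transactions
  |> Enum.chunk_every(1_000)
  |> Task.async_stream(
    fn chunk -> :crypto.hash(:sha256, Enum.join(chunk)) end,
    max_concurrency: System.schedulers_online()
  )
  |> Enum.map(fn {:ok, hash} -> hash end)
#+end_src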

High availability and resilience

The challenge: These are the requirements for every type of application and even more so for competitive and highly distributed blockchain networks. The communication and preservation of a state need to be as available as possible to avoid inconsistent states, network forks and disruptions experienced by the users.

The answer: The fault tolerance properties mentioned in the previous paragraph combined with built-in distribution leads to high availability even in cases of hardware malfunction. Erlang and Elixir have the built-in mnesia database system with the ability to replicate data over a cluster of nodes so if one node goes down, the state of the system is not lost.
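
A minimal sketch of such replication, assuming two already-connected nodes (the node names are examples only):

#+begin_src Elixir
nodes = [:"a@host1", :"b@host2"]

:ok = :mnesia.create_schema(nodes)            # create an on-disc schema on both
:rpc.multicall(nodes, :mnesia, :start, [])    # start Mnesia everywhere

# Keep a disc-backed replica of the table on both nodes, so losing one node
# does not lose the state.
{:atomic, :ok} =
  :mnesia.create_table(:ledger_state,
    attributes: [:key, :value],
    disc_copies: nodes
  )
#+end_src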

Erlang and Elixir provide the supervisor pattern to handle errors.

An example of a Supervision Tree, used to build a hierarchical process structure 

Computing is done in multiple processes, and if an error occurs and a process crashes, the supervisor is there to handle the situation: restart the computation or take some other measure. This pattern keeps the actual code clean, as error handling can be implemented elsewhere. As processes are isolated and do not share memory, errors stay localised.
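
A minimal sketch of the supervisor pattern in Elixir (the child modules are hypothetical):

#+begin_src Elixir
defmodule Chain.Supervisor do
  use Supervisor

  def start_link(arg), do: Supervisor.start_link(__MODULE__, arg, name: __MODULE__)

  @impl true
  def init(_arg) do
    children = [
      # Hypothetical workers: with :one_for_one, only the crashed child restarts.
      {Chain.BlockBuilder, []},
      {Chain.PeerManager, []}
    ]

    Supervisor.init(children, strategy: :one_for_one)
  end
end
#+end_src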

Built-in distribution

The challenge: This is highly relevant for trusted or hybrid networks where a central network takes authoritative decisions on top of a broader P2P network. Using the “out of the box” Erlang distribution and proven consistency approaches such as RAFT can be a quick win towards a fast prototype of a blockchain solution.

The answer: Erlang and Elixir provide out-of-the-box tools to run a distributed system. The message-handling functionalities of the system are transparent, so sending a message to a remote process is just as simple as sending it to a local process. There’s no need for convoluted IDLs, naming services or brokers.
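
A minimal sketch of this transparency (node and process names are assumptions for illustration):

#+begin_src Elixir
# Assuming two nodes started with the same cookie, e.g.
#   iex --sname a --cookie chain -S mix
#   iex --sname b --cookie chain -S mix
Node.connect(:"b@myhost")

# Calling a process registered on the remote node looks just like a local call
# (Chain.BlockBuilder is assumed to be registered under its module name there):
GenServer.call({Chain.BlockBuilder, :"b@myhost"}, :latest_block)

# Plain message passing is equally transparent:
send({Chain.BlockBuilder, :"b@myhost"}, :ping)
#+end_src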

Safety and Security

The challenge: Among the security features that both trusted and untrusted blockchain solutions strongly require, is the critical protection of access to the state memory, therefore reducing the exposure to a range of vulnerabilities.

The answer: Erlang and Elixir, just like many high-level languages, do not manipulate memory directly so applications written in Erlang and Elixir are effectively immune to many vulnerabilities like buffer overflows and return-oriented programming. Exploiting these vulnerabilities would require direct access to the memory, but the BEAM VM hides the memory from the application code.

While many business leaders are still trying to figure out how to put the technology to work for maximum ROI, most agree on two things:

  1. Blockchain unlocks vast value potential from optimised business operations.
  2. It’s here to stay.

Unlocking the potential of technology

Talking about blockchain implementation is no longer merely food for thought. Organisations should keep an eye on developments in blockchain tech and start planning how to best use this transformative technology, to unleash trapped value in key operational processes.

It’s clear – blockchain should be on every company’s agenda, regardless of industry.

If you want to start a conversation about engaging us for your project, just drop the team a line.

The post Blockchain Tech Deep Dive | Innovating with Erlang and Elixir appeared first on Erlang Solutions.

by Erlang Solutions Team at April 11, 2024 09:10