Planet Jabber

May 30, 2024

Erlang Solutions

7 Key Blockchain Principles for Business

Welcome to the final instalment of our Blockchain series. Here, we take a look at the seven fundamental principles that underpin blockchain: immutability, decentralisation, ‘workable’ consensus, distribution and resilience, transactional automation (including ‘smart contracts’), transparency and trust, and links to the external world.

For business leaders, understanding these core principles is crucial in harnessing the potential for building trust, spearheading innovation and driving overall business efficiency. 

If you missed the previous blog, feel free to learn all about the strengths of Erlang and Elixir in blockchain here.

Now let’s discuss how these seven principles can be leveraged to transform business operations.

Understanding the Core Concepts

In a survey conducted by EY, over a third (38%) of US workers surveyed said that blockchain technology is widely used within their businesses. A further 44% said the technology would be widely used within three years, and 18% reported that it was still a few years away from being widely used within their business.

To increase the adoption of blockchain, it is key to understand its principles, how it operates, and the advantages it offers across various industries, such as financial services, retail, advertising and marketing, and digital health.

Immutability

In an ideal world, we would want to keep an accurate record of events and make sure it doesn’t degrade over time due to natural events, human error, or fraud. While physical items can change over time, digital information can be continuously corrected to prevent deterioration.

Implementing an immutable blockchain aims to maintain a digital history that remains unaltered over time. This is especially useful for businesses when it comes to assessing the ownership or the authenticity of an asset or to validate one or more transactions. In the context of legalities and business regulation, having an immutable record of transactions is key as this can save time and resources by streamlining these processes.

In a well-designed blockchain, data is encoded using hashing algorithms. This ensures that only those with sufficient information can verify a transaction. This is typically implemented on top of Merkle trees, where hashes of combined hashes are calculated.

Merkle tree or hash tree
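
To make this concrete, here is a minimal Elixir sketch of a Merkle (hash) tree: leaves are hashed, then pairs of hashes are concatenated and hashed again until a single root remains. It is an illustration only, not any particular chain’s implementation; production designs add canonical serialisation, padding and domain-separation rules on top.

```elixir
defmodule MerkleSketch do
  # Hash a single transaction payload into a leaf.
  def leaf(data), do: :crypto.hash(:sha256, data)

  # Reduce a list of hashes pairwise until only the root remains.
  def root([single]), do: single

  def root(hashes) do
    hashes
    # On an odd count, duplicate the last hash so every node has a pair.
    |> Enum.chunk_every(2, 2, [List.last(hashes)])
    |> Enum.map(fn [a, b] -> :crypto.hash(:sha256, a <> b) end)
    |> root()
  end
end

# Any change to any transaction changes the root, which is what makes
# tampering with the recorded history detectable.
txs = ["alice->bob:10", "bob->carol:3", "carol->dave:7"]
txs |> Enum.map(&MerkleSketch.leaf/1) |> MerkleSketch.root() |> Base.encode16()
```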

Challenges raised by business leaders

Legitimate questions can be raised by business leaders about storing an immutable data structure:

  • Scalability: How is the increasing volume of data handled once it surpasses ledger capacities?
  • Impact of decentralisation: What effect does growing data history and validation complexity have on decentralisation and participant engagement?
  • Performance verification: How does verification degrade as data history expands, particularly during peak usage?
  • Risk mitigation: How can we ensure consensus and prevent fragmented networks or unauthorised forks in transaction history?

Businesses face challenges in managing growing data, maintaining decentralisation, verifying transactions, and preventing risks in immutable data storage. Meeting regulations also adds complexity, and decisions about what data to store must take sensitivity into account.

Addressing regulatory challenges

Compliance with GDPR introduces challenges, especially concerning the “right to be forgotten.” This is important because fines for non-compliance with GDPR are potentially very severe. The solutions introduced so far effectively aim at anonymising the information that enters the immutable on-chain storage process, while sensitive information is stored separately in support databases where it can be deleted if required.
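
As an illustration of that pattern, here is a hedged Elixir sketch (not any particular product’s API): the personal record stays off-chain where it can be deleted, and only a salted hash goes on-chain. Once the off-chain record is erased, the on-chain entry is effectively meaningless.

```elixir
defmodule OffChainSketch do
  # Split a record into an on-chain part (a salted hash, safe to keep
  # forever) and an off-chain part (deletable if the user asks).
  def prepare(sensitive_data) do
    salt = :crypto.strong_rand_bytes(16)
    digest = :crypto.hash(:sha256, salt <> sensitive_data)

    %{
      on_chain: Base.encode16(digest),
      off_chain: %{salt: salt, data: sensitive_data}
    }
  end

  # Verification only works while the off-chain record still exists;
  # deleting it leaves the on-chain hash without meaning.
  def verify?(on_chain_digest, %{salt: salt, data: data}) do
    Base.encode16(:crypto.hash(:sha256, salt <> data)) == on_chain_digest
  end
end
```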

The challenge lies in determining upfront what information is considered sensitive and suitable for inclusion in the immutable record. A wrong choice has the potential to backfire at a later stage if any involved actor manages to extract or trace sensitive information through the immutable history.

Immutability in blockchain technology provides a solution to preserving accurate historical records, ensuring the authenticity and ownership of assets, streamlining transaction validation, and saving businesses time and resources. But it also has its challenges, such as managing data volumes, maintaining decentralisation, and ensuring compliance with regulations such as GDPR. Despite these challenges, businesses can leverage immutable blockchain technology to modernise record-keeping practices and uphold the integrity of their operations.

Decentralisation of control

Remember the 2008 financial crash? One of the reactions following this crisis was against over-centralisation. 

In response to the movement towards decentralisation, businesses have acknowledged the potential for innovation and adaptation. Embracing decentralisation not only aligns with consumer values of independence and democratic fairness, but it also presents opportunities for businesses to explore new markets and develop innovative products and services, as well as implement decentralised governance models within their own organisations.

Use cases for decentralisation

There are many ways in which businesses can leverage blockchain technology in order to embrace decentralisation and unlock new growth opportunities:

Decentralised finance (DeFi): DeFi platforms leverage blockchain technology to provide financial services without the need for intermediaries, such as banks or brokerages.

Supply chain management: By recording every transaction on a blockchain ledger, businesses can track the movement of goods from the point of origin to the end consumer. 

Smart contracts: Automatically enforce and execute contractual agreements when predefined conditions are met, also without the need for intermediaries. 

Tokenisation of assets: Businesses can turn their assets into digital tokens. This helps split ownership into smaller parts, making it easier to buy and sell, and allowing direct trading between people without intermediaries.

Identity management: Blockchain-based identity management systems offer secure and decentralised solutions. Businesses can use blockchain to verify the identity of customers, employees, and partners while giving people greater control over their data. 

Data management and monetisation: Blockchain allows for businesses to securely manage and monetise data by giving individuals control over their data, facilitating direct transactions between data owners and consumers. 

Further considerations of decentralisation

With full decentralisation, there is no central authority to resolve potential transactional issues. Traditional, centralised systems have well-developed anti-fraud and asset recovery mechanisms which people have become used to. 

Using new, decentralised technology places a far greater responsibility on the user if they are to receive all of the benefits of the technology, forcing them to take additional precautions when it comes to handling and storing their digital assets.

There is no point in having an ultra-secure blockchain if one then hands over one’s wallet private key to an intermediary whose security is lax: it’s like having the most secure safe in the world and then writing the combination on a whiteboard in the same room.

Decentralisation, security, and usability

For businesses, embracing decentralisation unlocks new opportunities while posing challenges in security and usability. Balancing these factors is key as businesses continue to navigate decentralised technologies, shaping the future of commerce and industry. 

Businesses must consider whether the increased level of personal responsibility associated with secure blockchain implementation is a price users are willing to pay, or if they will trade off some security for ease of use and potentially more centralisation.

Workable Consensus

As businesses increasingly push towards decentralised forms of control and responsibility, the fundamental requirement to validate transactions without a central authority, known as the ‘consensus’ problem, has been brought to light. The blockchain industry has seen various approaches emerge to address this, with some competing and others complementing each other.

There’s been a lot of attention on governance in blockchain ecosystems. This involves regulating how quickly new blocks are added to the chain and the rewards for miners (especially in proof-of-work blockchains). Overall, it’s crucial to set up incentives and deterrents so that everyone involved helps the chain grow healthily.

Besides serving as an economic deterrent against denial of service and spam attacks, Proof of Work (POW) approaches are amongst the first attempts to automatically work out, via the use of computational power, which ledgers/actors have the authority to create/mine new blocks. Similar approaches (proof of space, proof of bandwidth etc) have followed, but all of them are vulnerable to deviations from the intended fair distribution of control.

Proof of work algorithm
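
The toy Elixir sketch below illustrates the idea behind the figure: a miner searches for a nonce that makes the block hash start with a required number of zero bits. It is a simplification; real networks adjust the difficulty so that blocks arrive at a target rate, and each extra bit of difficulty doubles the expected work.

```elixir
defmodule PowSketch do
  # Find a nonce so that SHA-256(block_data <> nonce) starts with
  # `difficulty` zero bits.
  def mine(block_data, difficulty, nonce \\ 0) do
    digest = :crypto.hash(:sha256, block_data <> <<nonce::64>>)

    if leading_zero_bits(digest) >= difficulty do
      {nonce, Base.encode16(digest)}
    else
      mine(block_data, difficulty, nonce + 1)
    end
  end

  # Count how many leading bits of the digest are zero.
  defp leading_zero_bits(<<0::1, rest::bitstring>>), do: 1 + leading_zero_bits(rest)
  defp leading_zero_bits(_), do: 0
end

# A low difficulty returns quickly; the cost of raising it is exactly the
# economic deterrent described above.
PowSketch.mine("block payload", 12)
```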

How do these methods affect businesses? Well-resourced participants can gain an edge by purchasing powerful hardware in bulk and running it in regions with cheaper electricity. This can help them outpace competitors in mining new blocks and gaining control, ultimately centralising authority.

In response to the challenges brought on by centralised control and environmental concerns associated with traditional mining methods, alternative approaches such as Proof of Stake (POS) and Proof of Importance (POI) have emerged. These methods remove the focus from computing resources and tie authority to accumulated digital asset wealth or participant productivity. However, implementing POS and POI while mitigating the risk of power and wealth concentration could present significant challenges for developers and business leaders alike.
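
To see why the concentration of wealth matters here, below is a deliberately simplified Elixir sketch (plain code, not any real protocol) of stake-weighted validator selection: the chance of producing the next block is proportional to the stake held. Real POS designs add randomness beacons, slashing and committee rotation on top.

```elixir
defmodule PosSketch do
  # stakes: a list of {validator, stake} pairs, e.g. [{"a", 50}, {"b", 30}, {"c", 20}]
  def pick_validator(stakes) do
    total = stakes |> Enum.map(fn {_validator, stake} -> stake end) |> Enum.sum()
    # Draw a "ticket" uniformly over the total stake and walk the list.
    ticket = :rand.uniform() * total
    pick(stakes, ticket)
  end

  # Last candidate always wins whatever ticket remains, guaranteeing termination.
  defp pick([{validator, _stake}], _ticket), do: validator

  defp pick([{validator, stake} | rest], ticket) do
    if ticket <= stake, do: validator, else: pick(rest, ticket - stake)
  end
end
```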

Distribution and resilience

Apart from decentralising authority, control and governance, blockchain solutions typically embrace a distributed peer-to-peer (P2P) design paradigm. 

This preference is motivated by the inherent resilience and flexibility that these types of networks have introduced and demonstrated, particularly in the context of file and data sharing. A centralised network, typical of mainframes and centralised services, is exposed to a ‘single point of failure’ vulnerability as the operations are always routed towards a central node.

If the central node breaks down or is congested, all the other nodes will be affected by disruptions. In a business context, decentralised and distributed networks attempt to reduce the detrimental effects that issues occurring on a node might trigger on other nodes. In a decentralised network, the failure of a node can still affect several neighbouring nodes that rely on it to carry out their operations. In a distributed network the idea is that the failure of a single node should not impact significantly any other node. Even when one preferential/optimal route in the network becomes congested or breaks down entirely, a message can still reach the destination via an alternative route. 

This greatly increases the chance of keeping a service available in the event of failure or malicious attacks such as a denial of service (DOS) attack. Blockchain networks with a distributed ledger redundancy are known for their resilience against hacking, especially when it comes to very large networks, such as Bitcoin. In such a highly distributed network, the resources needed to generate a significant disruption are very high, which not only delivers on the resilience requirement but also works as a deterrent against malicious attacks (mainly because the cost of conducting a successful malicious attack becomes prohibitive).

Although a distributed topology can provide an effective response to failures or traffic spikes, businesses need to be aware that delivering resilience against prolonged over-capacity demands or malicious attacks requires adequate adapting mechanisms. While the Bitcoin network is well positioned, as it currently benefits from a high capacity condition (due to the historically high incentive to purchase hardware by third-party miners), this is not the case for other emerging networks as they grow in popularity. This is where novel instruments, capable of delivering preemptive adaptation combined with back pressure throttling applied to the P2P level, can be of great value.

Distributed systems are not new and, whilst they provide highly robust solutions to many enterprise and governmental problems, they are subject to the laws of physics and require their architects to consider the trade-offs that need to be made in their design and implementation (e.g. consistency vs availability).

Automation

A high degree of automation is required for businesses to sustain a coherent, fair and consistent blockchain and surrounding ecosystem. Existing areas with a high demand for automation include those common to most distributed systems: for example, deployment, elastic topologies, monitoring, recovery from anomalies, testing, continuous integration, and continuous delivery.

For blockchains, these represent well-established IT engineering practices. Additionally, there is a creative R&D effort to automate the interactions required to handle assets, computational resources and users across a range of new problem spaces (e.g. logistics, digital asset creation and trading).

The trend of social interactions has seen a significant shift towards scripting for transactional operations. This is where smart contracts and constrained virtual machine (VM) interpreters have emerged – an effort pioneered by the Ethereum project.

Many blockchain enthusiasts are drawn to the ability to set up asset exchanges, specifying conditions and actions triggered by certain events. Smart contracts find various applications in lotteries, digital asset trading, and derivative trading. However, despite the exciting potential of smart contracts, getting involved in this area requires a significant level of expertise. Only skilled developers who are willing to invest time in learning Domain Specific Languages (DSL) can create and modify these contracts.

The challenge is to respond to safety and security concerns when smart contracts are applied to edge case scenarios that deviate from the ‘happy path’. If badly designed contracts cannot properly roll back or undo a miscarried transaction, their execution might lead to assets being lost or erroneously handed over to unwanted receivers.
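
The condition/action shape of such a contract can be illustrated without committing to Solidity or any particular chain’s VM. Below is a hedged Elixir sketch of a toy escrow whose ‘unhappy path’ explicitly refunds the buyer after a missed deadline instead of stranding the funds, which is the failure mode described above.

```elixir
defmodule EscrowSketch do
  # Toy escrow 'contract': funds go to the seller only if delivery is
  # confirmed before the deadline; otherwise they are refunded to the buyer.
  defstruct [:buyer, :seller, :amount, :deadline, state: :funded]

  def settle(%__MODULE__{state: :funded} = contract, delivery_confirmed?, now) do
    cond do
      delivery_confirmed? and now <= contract.deadline ->
        {:transfer, contract.seller, contract.amount, %{contract | state: :settled}}

      now > contract.deadline ->
        # Explicit unhappy path: the deadline passed, so return the funds.
        {:refund, contract.buyer, contract.amount, %{contract | state: :refunded}}

      true ->
        # Conditions not yet met: leave the contract untouched.
        {:wait, contract}
    end
  end
end

# Example: delivery confirmed at time 15, before a deadline of 20, pays the seller.
# EscrowSketch.settle(%EscrowSketch{buyer: "buyer", seller: "seller", amount: 100, deadline: 20}, true, 15)
```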

Automation and governance

Another area in high need of automation is governance. Any blockchain ecosystem of users and computing resources requires periodic configurations of the parameters to carry on operating coherently and consensually. This results in a complex exercise of tuning incentives and deterrents to guarantee the fulfilment of ambitious collaborative and decentralised goals. The newly emerging field of ‘blockchain economics’ (combining economics, game theory, social science and other disciplines) remains in its infancy.

The removal of a central ruling authority produces a vacuum that needs to be filled by an adequate decision-making body, which is typically supplied with automation that maintains a combination of static and dynamic configuration settings. Those consensus solutions referred to earlier, which use computational resources or social stakeable assets to assign the authority not only to produce blocks but also to steer the variable part of governance, have succeeded in filling the decision-making gap in a fair and automated way. Subsequently, the exploitation of flaws in the static element of governance has hindered the success of these models. This has contributed to the rise in popularity of curated approaches such as POA or DPOS, which not only bring back centralised control but also reduce the automation of governance.

This is a major area where blockchain must evolve in order to achieve widespread market adoption.

Transparency and trust

For businesses to produce the desired audience engagement for blockchain and eventual mass adoption and success, consensus and governance mechanisms need to operate transparently. Users need to know who has access to what data so that they can decide what can be stored and possibly shared on-chain. These are the contractual terms by which users agree to share their data. As previously discussed, users might be required to exercise the right for their data to be deleted, which typically is a feature delivered via auxiliary, ‘off-chain’ databases. In contrast, only hashed information, effectively devoid of its meaning, is preserved permanently on-chain.

Given the immutable nature of the chain history, it is important to decide upfront what data should be permanently written on-chain and what gets written off-chain. The users should be made aware of what data gets stored on-chain and with whom it could potentially be shared. Changing access to on-chain data or deleting it goes against the fundamentals of immutability and therefore is almost impossible. Getting that decision wrong at the outset can significantly affect the cost and usability (and therefore likely adoption) of the particular blockchain in question.

Besides transparency, trust is another critical feature that users and customers legitimately seek. Trust has to go beyond the scope of the people involved as systems need to be trusted as well. Every static element, such as an encryption algorithm, the dependency on a library, or a fixed configuration, is potentially exposed to vulnerabilities.

Link to the external world

The attractive features that blockchain has brought to the internet market would be limited to handling digital assets unless there was a way to link information to the real world. Embracing blockchain solely within digital boundaries may diminish its appeal, as businesses seek solutions that integrate seamlessly with the analogue realities of our lives.

Technologies used to overcome these limitations include cyber-physical devices such as sensors for input and robotic activators for output, and in most circumstances, people and organisations. As we read through most blockchain white papers, we occasionally come across the notion of the Oracle, which in short, is a way to name an input coming from a trusted external source that could potentially trigger/activate a sequence of transactions in a Smart Contract or which can otherwise be used to validate some information that cannot be validated within the blockchain itself.

Blockchain oracles connecting blockchains to inputs and outputs

Bitcoin and Ethereum, still the two dominant projects in the blockchain space, are viewed by many investors as an opportunity to diversify a portfolio or speculate on the value of their respective cryptocurrencies. The same applies to a wide range of other cryptocurrencies, with the exception of fiat-pegged currencies, most notably Tether, where the value is effectively bound to the US dollar. Conversions from one cryptocurrency to another and to/from fiat currencies are normally operated by exchanges on behalf of an investor. These are again peripheral services that serve as a link to the external physical world. For businesses, these exchanges provide crucial services that facilitate investment and trading activities, contributing to the broader ecosystem of blockchain-based assets.

Besides oracles and cyber-physical links, interest is emerging in linking smart contracts together to deliver a comprehensive solution. Contracts could indeed operate in a cross-chain scenario to offer interoperability among a variety of digital assets and protocols. Although attempts to combine different protocols and approaches have emerged, this is still an area where further R&D is necessary to provide enough instruments and guarantees to developers and entrepreneurs. The challenge is to deliver cross-chain functionalities without the support of a central governing agency/body.

To conclude

As we’ve highlighted throughout the series, blockchain offers real transformative potential across a wide range of industries. For a business to truly leverage this technology, the fundamentals we have outlined must be understood in order to navigate the complexities of blockchain adoption successfully.

If you want to start a conversation with the team, feel free to drop us a line.

The post 7 Key Blockchain Principles for Business appeared first on Erlang Solutions.

by Erlang Solutions Team at May 30, 2024 09:46

Blockchain Tech Deep Dive 2/4 | Myths vs. Realities

This is the second part of our ‘Making Sense of Blockchain’ blog post series – you can read part 1 on ‘6 Blockchain Principles’ here. This article is based on the original post by Dominic Perini here.

Join our FinTech mailing list for more great content and industry and events news, sign up here >>

With so much hype surrounding blockchain, we separate the reality from the myths to ensure delivery of the ROI and competitive advantage that you need.

It’s not our aim here to discuss the data structure of blockchain itself, issues like those of transactions per second (TPS) or questions such as ‘what’s the best Merkle tree solution to adopt?’. Instead, we shall examine the state of maturity of blockchain technology and its alignment with the core principles that underpin a distributed ledger ecosystem.

Blockchain technology aims to embrace the following high-level principles:

7 founding principles of blockchain

  • Immutability 
  • Decentralisation 
  • ‘Workable’ consensus
  • Distribution and resilience
  • Transactional automation (including ‘smart contracts’)
  • Transparency and Trust
  • A link to the external world

Immutability of history

In an ideal world it would be desirable to preserve an accurate historical trace of events, and make sure this trace does not deteriorate over time, whether through natural events, human error or by the intervention of fraudulent actors. Artefacts produced in the analogue world face alterations over time while in the digital world the quantized / binary nature of stored information provides the opportunity for continuous corrections to prevent deterioration that might occur over time.

Writing an immutable blockchain aims to retain a digital history that cannot be altered over time. This is particularly useful when it comes to assessing the ownership or the authenticity of an asset or to validate one or more transactions.

We should note that, on top of the inherent immutability of a well-designed and implemented blockchain, hashing algorithms provide a means to encode the information that gets written in the history so that the capacity to verify a trace/transaction can only be performed by actors possessing sufficient data to compute the one-way cascaded encoding/encryption. This is typically implemented on top of Merkle trees where hashes of concatenated hashes are computed.

Legitimate questions can be raised about the guarantees for indefinitely storing an immutable data structure:

  • If this is an indefinitely growing history, where can it be stored once it grows beyond the capacity of the ledgers?
  • As the history size grows (and/or the computing power needed to validate further transactions increases) this reduces the number of potential participants in the ecosystem, leading to a de facto loss of decentralisation. At what point does this concentration of ‘power’ create concerns?
  • How does verification performance deteriorate as the history grows?
  • How does it deteriorate when a lot of data gets written on it concurrently by users?
  • How long is the segment of data that you replicate on each ledger node?
  • How much network traffic would such replication generate?
  • How much history is needed to be able to compute a new transaction?
  • What compromises need to be made on linearisation of the history, replication of the information, capacity to recover from anomalies and TPS throughput?


Further to the above questions, how many replicas converging to a specific history (i.e. consensus) are needed for it to carry on existing? And in particular:

  • Can a fragmented network carry on writing to their known history?
  • Is an approach designed to ‘heal’ any discrepancies in the immutable history of transactions by rewarding the longest fork, fair and efficient?
  • Are the deterrents strong enough to prevent a group of ledgers forming their own fork that eventually reaches wider adoption?


Furthermore, a new requirement to comply with the General Data Protection Regulations (GDPR) in Europe and ‘the right to be forgotten’ introduces new challenges to the perspective of keeping permanent and immutable traces indefinitely. This is important because fines for breaches of GDPR are potentially very severe. The solutions introduced so far effectively aim at anonymising the information that enters the immutable on-chain storage process, while sensitive information is stored separately in support databases where this information can be deleted if required. None of these approaches has yet been tested by the courts. 

The challenging aspect here is to decide upfront what is considered sensitive and what can safely be placed on the immutable history. A wrong choice can backfire at a later stage in the event that any involved actor manages to extract or trace sensitive information through the immutable history.

Immutability represents one of the fundamental principles that motivate the research into blockchain technology, both private and public. The solutions explored so far have managed to provide a satisfactory response to the market needs via the introduction of history linearisation techniques, one-way hashing encryptions, Merkle trees and off-chain storage, although the linearity of the immutable history comes at a cost (notably transaction volume).

Decentralisation of control

One of the reactions following the 2008 global financial crisis was against over-centralisation. This led to the exploration of various decentralised mechanisms. The proposition that individuals would like to enjoy the freedom to be independent of a central authority gained in popularity. Self-determination, democratic fairness and heterogeneity as a form of wealth are among the dominant values broadly recognised in Western (and, increasingly, non-Western) society. These values added weight to the movement that introducing decentralisation in a system is positive.

With full decentralisation, there is no central authority to resolve potential transactional issues for us. Traditional, centralised systems have well developed anti-fraud and asset recovery mechanisms which people have become used to. Using new, decentralised technology places a far greater responsibility on the user if they are to receive all of the benefits of the technology, forcing them to take additional precautions when it comes to handling and storing their digital assets.

There’s no point having an ultra-secure blockchain if one then hands over one’s wallet private key to an intermediary whose security is lax: it’s like having the most secure safe in the world and then writing the combination on a whiteboard in the same room.

Is the increased level of personal responsibility that goes with the proper implementation of a secure blockchain a price that users are willing to pay? Or, will they trade off some security in exchange for ease of use (and, by definition, more centralisation)? 

Consensus

The consistent push towards decentralised forms of control and responsibility has brought to light the fundamental requirement to validate transactions without a central authority; known as the ‘consensus’ problem. Several approaches have grown out of the blockchain industry, some competing and some complementary.

There has also been a significant focus on the concept of governance within a blockchain ecosystem. This concerns the need to regulate the rates at which new blocks are added to the chain and the associated rewards for miners (in the case of blockchains using proof of work (POW) consensus methodologies). More generally, it is important to create incentives and deterrent mechanisms whereby interested actors contribute positively to the healthy continuation of chain growth.

Besides serving as an economic deterrent against denial of service and spam attacks, POW approaches are amongst the first attempts to automatically work out, via the use of computational power, which ledgers/actors have the authority to create/mine new blocks. Other similar approaches (proof of space, proof of bandwidth etc) followed, however, they all suffer from exposure to deviations from the intended fair distribution of control. Wealthy participants can, in fact, exploit these approaches to gain an advantage via purchasing high performance (CPU / memory / network bandwidth) dedicated hardware in large quantities and operating it in jurisdictions where electricity is relatively cheap. This results in overtaking the competition to obtain the reward, and the authority to mine new blocks, which has the inherent effect of centralising the control. Also, the huge energy consumption that comes with the inefficient nature of the competitive race to mine new blocks in POW consensus mechanisms has raised concerns about its environmental impact and economic sustainability.

Proof of Stake (POS) and Proof of Importance (POI) are among the ideas introduced to drive consensus via the use of more social parameters, rather than computing resources. These two approaches link the authority to the accumulated digital asset/currency wealth or the measured productivity of the involved participants. Implementing POS and POI mechanisms, whilst guarding against the concentration of power/wealth, poses not insubstantial challenges for their architects and developers.

More recently, semi-automatic approaches, driven by a human-curated group of ledgers, are putting in place solutions to overcome the limitations and arguable fairness of the above strategies. The Delegated Proof of Stake (DPOS) and Proof of Authority (POA) methods promise higher throughput and lower energy consumption, while the human element can ensure a more adaptive and flexible response to potential deviations caused by malicious actors attempting to exploit a vulnerability in the system.

Distribution and resilience

Apart from decentralising authority, control and governance, blockchain solutions typically embrace a distributed peer to peer (P2P) design paradigm. This preference is motivated by the inherent resilience and flexibility that these types of networks have introduced and demonstrated, particularly in the context of file and data sharing.

A centralised network, typical of mainframes and centralised services, is clearly exposed to a ‘single point of failure’ vulnerability as the operations are always routed towards a central node. In the event that the central node breaks down or is congested, all the other nodes will be affected by disruptions.

Decentralised and distributed networks attempt to reduce the detrimental effects that issues occurring on a node might trigger on other nodes. In a decentralised network, the failure of a node can still affect several neighbouring nodes that rely on it to carry out their operations. In a distributed network the idea is that the failure of a single node should not impact significantly any other node. In fact, even when one preferential/optimal route in the network becomes congested or breaks down entirely, a message can reach the destination via an alternative route. This greatly increases the chance of keeping a service available in the event of failure or malicious attacks such as a denial of service (DOS) attack.

Blockchain networks where a distributed topology is combined with a high redundancy of ledgers backing a history have occasionally been declared ‘unhackable’ by enthusiasts or, as some more prudent debaters say, ‘difficult to hack’. There is truth in this, especially when it comes to very large networks such as that of Bitcoin. In such a highly distributed network, the resources needed to generate a significant disruption are very high, which not only delivers on the resilience requirement but also works as a deterrent against malicious attacks (principally because the cost of conducting a successful malicious attack becomes prohibitive).

Although a distributed topology can provide an effective response to failures or traffic spikes, you need to be aware that delivering resilience against prolonged over-capacity demands or malicious attacks requires adequate adapting mechanisms. While the Bitcoin network is well positioned, as it currently benefits from a high capacity condition (due to the historically high incentive to purchase hardware by third-party miners), this is not the case for other emerging networks as they grow in popularity. This is where novel instruments, capable of delivering preemptive adaptation combined with back pressure throttling applied to the P2P level, can be of great value.

Distributed systems are not new and, whilst they provide highly robust solutions to many enterprise and governmental problems, they are subject to the laws of physics and require their architects to consider the trade-offs that need to be made in their design and implementation (e.g. consistency vs availability).

Automation

In order to sustain a coherent, fair and consistent blockchain and surrounding ecosystem, a high degree of automation is required. Existing areas with a high demand for automation include those common to most distributed systems: for instance, deployment, elastic topologies, monitoring, recovery from anomalies, testing, continuous integration, and continuous delivery. In the context of blockchains, these represent well-established IT engineering practices. Additionally, there is a creative R&D effort to automate the interactions required to handle assets, computational resources and users across a range of new problem spaces (e.g. logistics, digital asset creation and trading).

The trend of social interactions has seen a significant shift towards scripting for transactional operations. This is where smart contracts and constrained virtual machine (VM) interpreters have emerged – an effort pioneered by the Ethereum project.

The ability to define how an asset exchange operates, under which conditions and in response to which triggers, has attracted many blockchain enthusiasts. Some of the most common applications of smart contracts involve lotteries, trade of digital assets and derivative trading. While there is clearly exciting potential unleashed by the introduction of smart contracts, it is also true that it is still an area with a high entry barrier. Only skilled developers who are willing to invest time in learning Domain Specific Languages (DSL) have access to the actual creation and modification of these contracts.

The challenge is to respond to safety and security concerns when smart contracts are applied to edge case scenarios that deviate from the ‘happy path’. If badly designed contracts cannot properly roll back or undo a miscarried transaction, their execution might lead to assets being lost or erroneously handed over to unwanted receivers.

Another area in high need of automation is governance. Any blockchain ecosystem of users and computing resources requires periodic configurations of the parameters to carry on operating coherently and consensually. This results in a complex exercise of tuning incentives and deterrents to guarantee the fulfilment of ambitious collaborative and decentralised goals. The newly emerging field of ‘blockchain economics’ (combining economics, game theory, social science and other disciplines) remains in its infancy.

Clearly, the removal of a central ruling authority produces a vacuum that needs to be filled by an adequate decision-making body, which is typically supplied with automation that maintains a combination of static and dynamic configuration settings. Those consensus solutions referred to earlier, which use computational resources or social stakeable assets to assign the authority not only to produce blocks but also to steer the variable part of governance, have succeeded in filling the decision-making gap in a fair and automated way. Subsequently, the exploitation of flaws in the static element of governance has hindered the success of these models. This has contributed to the rise in popularity of curated approaches such as POA or DPOS, which not only bring back centralised control but also reduce the automation of governance.

We expect this to be one of the major areas where blockchain has to evolve in order to succeed in getting widespread market adoption.

Transparency and trust

In order to produce the desired audience engagement for blockchain and eventual mass adoption and success, consensus and governance mechanisms need to operate transparently. Users need to know who has access to what data so that they can decide what can be stored and possibly shared on-chain. These are the contractual terms by which users agree to share their data. As previously discussed, users might be required to exercise the right for their data to be deleted, which typically is a feature delivered via auxiliary, ‘off-chain’ databases. In contrast, only hashed information, effectively devoid of its meaning, is preserved permanently on-chain.

Given the immutable nature of the chain history, it is important to decide upfront what data should be permanently written on-chain and what gets written off-chain. The users should be made aware of what data gets stored on-chain and with whom it could potentially be shared. Changing access to on-chain data or deleting it goes against the fundamentals of immutability and therefore is almost impossible. Getting that decision wrong at the outset can significantly affect the cost and usability (and therefore likely adoption) of the particular blockchain in question.

Besides transparency, trust is another critical feature that users legitimately seek. Trust has to go beyond the scope of the people involved as systems need to be trusted as well. Every static element, such as an encryption algorithm, the dependency on a library, or a fixed configuration, is potentially exposed to vulnerabilities.

Link to the external world

The attractive features that blockchain has brought to the internet market would be limited to handling digital assets unless there was a way to link information to the real world. It is safe to say that there would be less interest if we were to accept that a blockchain can only operate under the restrictive boundaries of the digital world, without connecting to the analog real world in which we live.

Technologies used to overcome these limitations include cyber-physical devices such as sensors for input and robotic activators for output, and in most circumstances, people and organisations. As we read through most blockchain white papers, we occasionally come across the notion of the Oracle, which in short, is a way to name an input coming from a trusted external source that could potentially trigger/activate a sequence of transactions in a Smart Contract or which can otherwise be used to validate some information that cannot be validated within the blockchain itself.

Bitcoin and Ethereum, still the two dominant projects in the blockchain space, are viewed by many investors as an opportunity to diversify a portfolio or speculate on the value of their respective cryptocurrencies. The same applies to a wide range of other cryptocurrencies with the exception of fiat-pegged currencies, most notably Tether, where the value is effectively bound to the US dollar. Conversions from one cryptocurrency to another and to/from fiat currencies are normally operated by exchanges on behalf of an investor. These are again peripheral services that serve as a link to the external physical world.

Besides oracles and cyber-physical links, interest is emerging in linking smart contracts together to deliver a comprehensive solution. Contracts could indeed operate in a cross-chain scenario to offer interoperability among a variety of digital assets and protocols. Although attempts to combine different protocols and approaches have emerged, this is still an area where further R&D is necessary in order to provide enough instruments and guarantees to developers and entrepreneurs. The challenge is to deliver cross-chain functionalities without the support of a central governing agency/body.

* originally published 2018 by Dominic Perini

For any business size in any industry, we’re ready to investigate, build and deploy your blockchain-based project on time and to budget.

Let’s talk

If you want to start a conversation about engaging us for your fintech project or talk about partnering and collaboration opportunities, please send our Fintech Lead, Michael Jaiyeola, an email or connect with him via Linkedin.

The post Blockchain Tech Deep Dive 2/4 | Myths vs. Realities appeared first on Erlang Solutions.

by Erlang Solutions Team at May 30, 2024 09:06

May 28, 2024

The XMPP Standards Foundation

Scaling up with MongooseIM 6.2.1

MongooseIM is a scalable, extensible and efficient real-time messaging server that allows organisations to build cost-effective communication solutions. Built on the XMPP protocol, MongooseIM is specifically designed for businesses facing the challenge of large deployments, where real-time communication and user experience are critical. The main feature of the recently released MongooseIM 6.2.1 is the improved CETS in-memory storage backend, which simplifies and enhances its scalability.

It is difficult to predict how much traffic your XMPP server will need to handle. This is why MongooseIM offers several means of scalability. Firstly, even one machine can handle millions of connected users, provided that it is powerful enough. However, one machine is not recommended for fault tolerance reasons, because every time it needs to be shut down for maintenance, upgrade or because of any issues, your service would experience downtime. As a result, it is recommended to have a cluster of connected MongooseIM nodes, which communicate efficiently over the Erlang Distribution protocol. Having at least three nodes in the cluster allows you to perform a rolling upgrade, where each node is stopped, upgraded, and then restarted before moving to the next node, maintaining fault tolerance. During such an upgrade, you can increase hardware capabilities of each node, scaling the system vertically. Horizontal scaling is even easier, because you only need to add new nodes to the already deployed cluster.

Mnesia

Mnesia is a built-in Erlang Database that allows sharing both persistent and in-memory data between clustered nodes. It is a great option at the start of your journey, because it resides on the local disk of each cluster node and does not need to be started as a separate component. However, when you are heading towards a real application, a few issues with Mnesia become apparent:

  1. Consistency issues, which tend to show up quite frequently when scaling or upgrading the cluster. Resolving them requires Erlang knowledge, which should not be the case for a general-purpose XMPP server.
  2. A persistent volume for schema and data is required on each node. Such volumes can be difficult and costly to maintain, seriously impacting the possibilities for automatic scaling.
  3. Unlike relational databases, Mnesia is not designed for storing huge amounts of data, which can lead to performance issues.

First and foremost, it is highly recommended not to store any persistent data in Mnesia, and MongooseIM can be configured to store such data in a relational database instead. However, up to version 6.1.0, MongooseIM would still need Mnesia to store in-memory data shared between the cluster nodes. For example, a shared table of user sessions is necessary for message routing between users connected to different cluster nodes. The problem is that even without persistent tables, Mnesia still needs to keep its schema on disk, and the first two issues listed above would not be eliminated.

Introducing CETS

Introduced in version 6.2.0 and further refined in version 6.2.1, CETS (Cluster ETS) is a lightweight replication layer for in-memory ETS tables that requires no persistent data. Instead, it relies on a discovery mechanism to connect and synchronise with other cluster nodes. When starting up, each node registers itself in a relational database, which you should use anyway to keep all your persistent data.

Getting rid of Mnesia removes a lot of important obstacles. For example, if you are using Kubernetes, MongooseIM no longer requires any persistent volume claims (PVCs), which could be costly, can get out of sync, and require additional management. Furthermore, with CETS you can easily set up horizontal autoscaling for your installation.

See it in action

If you want to quickly set up a working autoscaled MongooseIM cluster using Helm, see the detailed blog post. For more information, consult the documentation, GitHub or the product page. You can try MongooseIM online as well.

Read about Erlang Solutions as sponsor of the XSF.

May 28, 2024 00:00

May 26, 2024

Ignite Realtime Blog

New Openfire plugin: XMPP Web!

We are excited to be able to announce the immediate availability of a new plugin for Openfire: XMPP Web!

This new plugin for the real-time communications server provided by the Ignite Realtime community allows you to install the third-party web client named ‘XMPP Web’ in mere seconds! By installing this new plugin, the web client is immediately ready for use.

This new plugin complements others that similarly allow you to deploy a web client with great ease, like Candy, inVerse and JSXC! With the addition of XMPP Web, the selection of easy-to-install clients for your users becomes even larger!

The XMPP Web plugin for Openfire is based on release 0.10.2 of the upstream project, which currently is the latest release. It will automatically become available for installation in the admin console of your Openfire server in the next few days. Alternatively, you can download it immediately from its archive page.

Do you think this is a good addition to the suite of plugins? Do you have any questions or concerns? Do you just want to say hi? Please stop by our community forum or our live groupchat!

For other release announcements and news, follow us on Mastodon or X.

by guus at May 26, 2024 17:50

May 23, 2024

Erlang Solutions

Balancing Innovation and Technical Debt

Let’s explore the delicate balance between innovation and technical debt. 

We will look into actionable strategies for managing debt effectively while optimising our infrastructure for resilience and agility.

Balancing acts and trade-offs

I was having this conversation with a close acquaintance not long ago. He’s setting up his new startup, filling a market gap he’s found, racing to get there before the gap closes. It’s a common starting point for many entrepreneurs. You have an idea you need to implement, and until it is implemented and (hopefully) sold, there is no revenue, all while someone else could close the gap before you do. Time-to-market is key.

While there’s no revenue, you acquire debt. And even while keeping it reasonably under control, you pay the financial debt off with a different kind of debt: technical debt. You choose to make a trade-off here, a trade-off that all too often is made without awareness. This trade-off between debts requires careful thinking too: just as financial debt is an obvious risk, so is a technical one.

Let’s define these debts. Technical debt is the accumulated cost of shortcuts or deferred maintenance in software development and IT infrastructure. Financial debt is the borrowing of funds to finance business operations or investments. They share a common thread: the trade-off between short-term gains and long-term sustainability.

Just as financial debt can provide immediate capital for growth, it can also drag the business into financial inflexibility and burdensome interest rates. Technical debt expedites product development or reduces time-to-market, at the expense of increased maintenance, reduced scalability, and decreased agility. It is an often overlooked aspect of a technological investment, whose prompt management can have a huge impact on the lifespan of the business. Just as an enterprise must manage its financial leverage to maintain solvency and liquidity, it must also manage its technical debt to ensure the reliability, scalability, and maintainability of its systems and software.

The Economics of Technical Debt

Consider the example of a rapidly growing e-commerce platform: appeal attracts demand, demand requires resources, and resources mean increased vulnerability. The growing volume of user data and resources attracts threats aiming to disrupt services, steal sensitive data, or cause reputational harm. In this environment, the platform’s success is determined by its ability to strike a delicate balance between serving legitimate customers and thwarting malicious actors, as both grow in ever-increasing proportions.

Early on, the platform prioritised rapid development and deployment of new features; however, in their haste to innovate, the technical team accumulated debt by taking shortcuts and deferring critical maintenance tasks. The result was a platform that was increasingly fragile and inflexible, leaving it vulnerable to disruptive attacks and more agile competitors. Meanwhile, understandably, the platform’s financial team kept allocating capital to funding marketing campaigns, product launches, and strategic acquisitions, under pressure to maximise profitability and shareholder value; however, they neglected to allocate sufficient resources towards cybersecurity initiatives, viewing them as discretionary expenses rather than critical investments in risk mitigation and resilience.

Technical currencies

If we’re talking about debt, and drawing a parallel with financial terms, let’s complete the parallel. By establishing the concept of currencies, we can build quantifiable metrics of value that reflect the health and resilience of digital assets. Code coverage, for instance, measures the proportion of the codebase exercised by automated tests, providing insights into the potential presence of untested or under-tested code paths. In this line, tests and documentation are the two assets that pay off the most technical debt.

See for example how coverage for MongooseIM has been continuously trending higher.

Similarly, Continuous Integration and Continuous Deployment (CI/CD) pipelines automate the process of integrating code changes, running automated tests, verifying engineering work, and deploying applications to diverse environments, enabling teams to deliver software updates frequently and with confidence. By streamlining the development workflow and reducing manual intervention, CI/CD pipelines enhance productivity, accelerate time-to-market, and minimise the risk of human error. Humans have bad days and sleepless nights; well-developed automation doesn’t.

Additionally, valuations on code quality that are diligently tracked on the organisation’s ticketing system provide valuable insights into the evolution of software assets and the effectiveness of ongoing efforts to address technical debt and improve code maintainability. These valuations enable organisations to prioritise repayment efforts, allocating resources effectively.

Repaying Technical Debt

The longer any debt remains unpaid, the greater its impact on the organisation — (technical) debt accrues “interest” over time. But, much like in finances, a debt is paid with available capital, and choosing a payment strategy can make a difference in whether capital is wasted or successfully (re)invested:

  1. Priorities and Plans: Identify and prioritise areas of technical debt based on their impact on the system’s performance, stability, and maintainability. Develop a plan that outlines the steps needed to address each aspect of technical debt systematically.
  2. Refactoring: Allocate time and resources to refactor code and systems to improve their structure, readability, and maintainability. Break down large, complex components into smaller, more manageable units, and eliminate duplicate or unnecessary code. See for example how we battled technical debt in MongooseIM.
  3. Automated Testing: Invest in automated testing frameworks and practices to increase test coverage and identify regression issues early in the development process. Implement continuous integration and continuous deployment (CI/CD) pipelines to automate the testing and deployment of code changes. Establishing this pipeline is always the first step into any new project we join and we’ve become familiar with diverse CI technologies like GitHub Actions, CircleCI, GitlabCI, or Jenkins.
  4. Documentation: Enhance documentation efforts to improve understanding and reduce ambiguity in the codebase. Document design decisions, architectural patterns, and coding conventions to facilitate collaboration and knowledge sharing among team members. Choose technologies that facilitate and enhance documentation work.

Repayment assets

Repayment assets are resources or strategies that can be leveraged to make debt repayment financially viable. Here are some key repayment assets to consider:

  1. Training and Education: Provide training and education opportunities for developers to enhance their skills and knowledge in areas such as software design principles, coding best practices, and emerging technologies. Encourage continuous learning and professional development to empower developers to make informed decisions and implement effective solutions.
  2. Technical Debt Reviews: Conduct regular technical debt reviews to assess the current state of the codebase, identify areas of concern, and track progress in addressing technical debt over time. Use metrics and KPIs to measure the impact of technical debt reduction efforts and inform decision-making.
  3. Collaboration and Communication: Foster a culture of collaboration and communication among development teams, stakeholders, and other relevant parties. Encourage open discussions about technical debt, its implications, and potential strategies for repayment, and involve stakeholders in decision-making processes.
  4. Incremental Improvement: Break down technical debt repayment efforts into smaller, manageable tasks and tackle them incrementally. Focus on making gradual improvements over time rather than attempting to address all technical debt issues at once, prioritising high-impact and low-effort tasks to maximise efficiency and effectiveness.

Don’t acquire more debt than you have to

While debt is a quintessential aspect of entrepreneurship, acquiring it unwisely is obviously shooting oneself in the foot. You’ll have to make many decisions and choose over many trade-offs, so you had better be well-informed before putting your finger on the red buttons.

Your service will require infrastructure

Whether you choose one vendor over another or decide to go self-hosted, use containerised technologies, so that future changes to better infrastructures are possible. Containers also provide a consistent environment for development, testing and production. Choose technologies that are good citizens in containerised environments.

Your service will require hardware resources

Whether you choose one or another hardware architecture or any amount of memory, use runtimes that can efficiently use and adapt to any given hardware, so that future changes to better hardware are fruitful. For example, Erlang’s concurrency model is famous for automatically taking advantage of any number of cores, and with technologies like Elixir’s Nx you can take advantage of esoteric GPU and TPU hardware for your machine learning tasks.
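
As a small illustration (assuming standard Elixir and no extra libraries), the snippet below spreads CPU-bound work across however many cores the machine or container exposes; nothing in the code changes when the hardware does.

```elixir
# The same code saturates 4 cores or 64: the runtime starts one scheduler
# per core and spreads the spawned tasks across them.
workloads = 1..10_000

results =
  workloads
  |> Task.async_stream(fn n -> :crypto.hash(:sha256, <<n::32>>) end,
       max_concurrency: System.schedulers_online())
  |> Enum.map(fn {:ok, digest} -> digest end)

IO.puts("Processed #{length(results)} items on #{System.schedulers_online()} schedulers")
```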

Your service will require agility

The market will push your offerings to its limit, in a never-ending stream of requests for new functionality and changes to your service. Your code will need to change, and respond to changes. From Elixir‘s metaprogramming and language extensibility to Gleam‘s strong type-safety, prioritise tools that likewise aid your developers to change things safely and powerfully.

Your service will require resiliency

There are two philosophies in the culture of error handling: either it is mathematically proven that errors cannot happen (Haskell’s approach), or it is assumed that they cannot always be avoided and we need to learn to handle them (Erlang’s approach). Wise technologies take one of these starting points as their a priori foundation and deal with the other end a posteriori. Choose your point on that scale deliberately, and be wary of technologies that take no clear stance. Errors can happen: electricity goes down, cables are cut, and attackers attack. Programmers have bad sleepless nights or get sick. Take a stance before errors bite your service.
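
In the Erlang/Elixir philosophy, this “errors will happen” stance is expressed through supervision trees. The sketch below (module names are hypothetical) shows the standard pattern: if the worker crashes, its supervisor simply restarts it.

defmodule MyApp.Worker do
  use GenServer

  def start_link(arg), do: GenServer.start_link(__MODULE__, arg, name: __MODULE__)

  @impl true
  def init(arg), do: {:ok, arg}
end

defmodule MyApp.Supervisor do
  use Supervisor

  def start_link(init_arg), do: Supervisor.start_link(__MODULE__, init_arg, name: __MODULE__)

  @impl true
  def init(_init_arg) do
    children = [
      {MyApp.Worker, :initial_state}
    ]

    # :one_for_one means a crashed child is restarted without touching its siblings.
    Supervisor.init(children, strategy: :one_for_one)
  end
end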

Your service will require availability

No fancy unique idea will sell if it can’t be bought, and no service will be used if it is not there to begin with. Unavailability takes a heavy toll on your revenue, so prioritise availability. Choose technologies that can handle not just failure, but even upgrades (!), without downtime. And to have real availability you always need at least two computers, in case one dies: choose technologies that make many independent computers cooperate easily and take over one another’s work transparently.
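
On the BEAM, making independent machines cooperate is built in. As a minimal, hypothetical sketch (node and host names are made up), two named nodes started with the same cookie can discover each other and run code on one another:

# Start two nodes, e.g.:  iex --sname a --cookie demo   and   iex --sname b --cookie demo
# Then, from node :a:
Node.connect(:"b@myhost")                    # returns true once the nodes see each other
Node.list()                                  # => [:"b@myhost"]

# Any connected node can transparently run work on another one:
:erpc.call(:"b@myhost", fn -> node() end)    # => :"b@myhost"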

A Case Study: A Balancing Act in Traffic Management

A chat system, like many web services, handles an effectively unbounded number of independent users. It is a heavily network-based application that needs to respond, in a timely and fair manner, to requests that are independent of each other. It is an embarrassingly parallel problem, since messages can be processed independently of each other, but it also has soft real-time properties: messages should be processed soon enough for a human to have a good user experience. It also faces the challenge of bad actors, which makes request blacklisting and throttling necessary.

MongooseIM is one such system. It is written in Erlang, and in its architecture, every user is handled by one actor.

It is containerised, and it uses all available resources efficiently and smoothly, adapting to any change of hardware, from small embedded systems to massive mainframes. Its architecture relies heavily on the Publish-Subscribe pattern: because Erlang is a functional language, functions are first-class citizens, so handler functions can be installed for all sorts of events, since we never know what new functionality we will need to implement in the future.

One important event is a new session starting. Blacklisting mechanisms are plentiful, whether based on specific identifiers, IP regions, or even modern AI-based behaviour analysis; we can’t predict the future, so we simply publish the “session opened” event and leave it to our future selves to install the right handler when it is needed.

Another important event is a simple message being sent. What if bad actors have successfully opened sessions and start flooding the system, consuming CPU and database resources unnecessarily? Again, changing requirements might dictate that the system give some users preferential treatment. One default option is to slow down all message processing to some reasonable rate, for which we use a traffic-shaping mechanism called the Token Bucket algorithm, implemented in our library Opuntia, named that way because if you touch it too fast, it stings you.
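
To make the idea concrete, here is a minimal token-bucket sketch in Elixir. It is not the Opuntia API, just an illustration of the algorithm: the bucket refills at a fixed rate up to a capacity, and a request is allowed only when a token is available.

defmodule TokenBucket do
  # Illustrative token bucket: `rate` tokens per second are added, up to `capacity`.
  defstruct [:capacity, :rate, :tokens, :last_refill]

  def new(capacity, rate) do
    %__MODULE__{capacity: capacity, rate: rate, tokens: capacity,
                last_refill: System.monotonic_time(:millisecond)}
  end

  def allow?(%__MODULE__{} = bucket) do
    now = System.monotonic_time(:millisecond)
    refilled = min(bucket.capacity,
                   bucket.tokens + (now - bucket.last_refill) / 1000 * bucket.rate)
    bucket = %{bucket | tokens: refilled, last_refill: now}

    if bucket.tokens >= 1 do
      # Spend one token and allow the message through.
      {true, %{bucket | tokens: bucket.tokens - 1}}
    else
      # No tokens left: the caller should throttle or reject the message.
      {false, bucket}
    end
  end
end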

You can read more about how scalable MongooseIM is in this article, where we pushed it to its limit. And while we continuously load-test our server, we haven’t done another round of limit-pushing since then; stay tuned for a future blog post when we do just that!

Lessons Learned

Technical debt has an inherent value akin to financial debt. Choosing the right tool for the job means acquiring the right technical debt when needed: leveraging strategies, partnerships, and solutions that prioritise resilience, agility, and long-term sustainability.

The post Balancing Innovation and Technical Debt appeared first on Erlang Solutions.

by Nelson Vides at May 23, 2024 10:58

May 21, 2024

JMP

Newsletter: SMS Routes, RCS, and more!

Hi everyone!

Welcome to the latest edition of your pseudo-monthly JMP update!

In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client.  Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

SMS Censorship, New Routes

We have written before about the increasing levels of censorship across the SMS network. When we published that article, we had no idea just how bad things were about to get. At the beginning of April, our main SMS route decided to begin censoring, in both directions, all messages containing many common profanities. There was quite a bit of back and forth about this, but in the end this carrier declared that the SMS network is not meant for person-to-person communication and that they do not believe in allowing any profanity to cross their network.

This obviously caused us to dramatically step up the priority of integrating with other SMS routes, work which is now nearing completion. We expect very soon to be offering long-term customers new options which will not only dramatically reduce the censorship issue, but also, in some cases, remove the max-10 group text limit, dramatically improve acceptance by online services, and more.

RCS

We often receive requests asking when JMP will add support for RCS, to complement our existing SMS and MMS offerings. We are happy to announce that we have RCS access in internal testing now. The access currently possible is better suited to business use than personal use, though a mix of both is certainly possible. We are assured that better access is coming later in the year, and we will keep you all posted on how that progresses. For now, if you are interested in testing this, especially if you are a business user, please do let us know and we’ll let you know when we are ready to start testing.

One thing to note is that “RCS” means different things to different people. The main RCS features we currently have access to are typing notifications, displayed/read notifications, and higher-quality media transmission.

Cheogram Android

Cheogram Android 2.15.3-1 was released this month, with bug fixes and new features including:

  • Major visual refresh, including optional Material You
  • Better audio routing for calls
  • More customizable custom colour theme
  • Conversation read-status sync with other supporting apps
  • Don’t compress animated images
  • Do not default to the network country when there is no SIM (for phone number format)
  • Delayed-send messages
  • Message loading performance improvements

New GeoApp Experiment

We love OpenStreetMap, but some of us have found existing geocoder/search options lacking when it comes to searching by business name, street address, etc. As an experimental way to temporarily bridge that gap, we have produced a prototype Android app (source code) that searches Google Maps and allows you to open search results in any mapping app you have installed. If people like this, we may also extend it with a server-side component that hides all PII, including IP addresses, from Google, for a small monthly fee. For now, the prototype is free to test and will install as “Maps+” in your launcher until we come up with a better name (suggestions welcome!).

To learn what’s happening with JMP between newsletters, here are some ways you can find out:

Thanks for reading and have a wonderful rest of your week!

by Stephen Paul Weber at May 21, 2024 19:22

May 17, 2024

Erlang Solutions

Instant Scalability with MongooseIM and CETS

The main feature of the recently released MongooseIM 6.2.1 is the improved CETS in-memory storage backend which makes it much easier to scale up.

It is difficult to predict how much traffic your XMPP server will need to handle. Are you going to have thousands or millions of connected users? Will you need to deliver hundreds of millions of messages per minute? Answering such questions is almost impossible if you are just starting up. This is why MongooseIM offers several means of scalability.

Clustering

Even one machine running MongooseIM can handle millions of connected users, provided that it is powerful enough. However, one machine is not recommended for fault tolerance reasons, because every time it needs to be shut down for maintenance, upgrade or because of any issues, your service would experience downtime. This is why we recommend using a cluster of connected MongooseIM nodes, which communicate efficiently over the Erlang Distribution protocol. Having at least three nodes in the cluster allows you to perform a rolling upgrade, where each node is stopped, upgraded, and then restarted before moving to the next node, maintaining fault tolerance and eliminating unnecessary downtime. During such an upgrade procedure, you can increase the hardware capabilities of each node, scaling the system vertically. Horizontal scaling is even easier because you only need to add new nodes to the already deployed cluster.

Mnesia

Mnesia is a built-in Erlang Database that allows sharing both persistent and in-memory data between clustered nodes. It is a great option at the start of your journey because it resides on the local disk of each cluster node and does not need to be started as a separate component. However, when you are heading towards a real application, a few issues with Mnesia become apparent:

  1. Consistency issues tend to show up quite frequently when scaling or upgrading the cluster. Resolving them requires Erlang knowledge, which should not be the case for a general-purpose XMPP server.
  2. A persistent volume for schema and data is required on each node. Such volumes can be difficult and costly to maintain, seriously impacting the possibilities for automatic scaling.
  3. Unlike relational databases, Mnesia is not designed for storing huge amounts of data, which can lead to performance issues.

After trying to mitigate such issues for a couple of years, we have concluded that it is best not to use Mnesia at all. First and foremost, it is highly recommended not to store any persistent data in Mnesia, and MongooseIM can be configured to store such data in a relational database instead. However, up to version 6.1.0, MongooseIM would still need Mnesia to store in-memory data. For example, a shared table of user sessions is necessary for message routing between users connected to different cluster nodes. The problem is that even without persistent tables, Mnesia still needs to keep its schema on disk, and the first two issues listed above would not be eliminated.

Introducing CETS

Introduced in version 6.2.0 and further refined in version 6.2.1, CETS (Cluster ETS) is a lightweight replication layer for in-memory ETS tables that requires no persistent storage. Instead, it relies on a discovery mechanism to connect and synchronise with other cluster nodes. When starting up, each node registers itself in a relational database, which you should use anyway to store all your persistent data. Getting rid of Mnesia removes the last obstacle on your way to easy and simple management of MongooseIM. For example, if you are using Kubernetes, MongooseIM no longer requires any persistent volume claims (PVCs), which could be costly, can get out of sync, and require additional management. Furthermore, with CETS you can easily set up automatic scaling of your installation.

Installing with Helm

As an example, let’s quickly set up a cluster of three MongooseIM nodes. You will need to have Helm and Kubernetes installed. The examples were tested with Docker Desktop, but they should work with any Kubernetes setup. As the first step, let’s install and initialise a PostgreSQL database with Helm:

$ curl -O https://raw.githubusercontent.com/esl/MongooseIM/6.2.1/priv/pg.sql
$ helm install db oci://registry-1.docker.io/bitnamicharts/postgresql \
   --set auth.database=mongooseim --set auth.username=mongooseim --set auth.password=mongooseim_secret \
   --set-file 'primary.initdb.scripts.pg\.sql'=pg.sql

It is useful to monitor all Kubernetes resources in another shell window:

$ watch kubectl get pod,sts,pvc,pv,svc,hpa

As soon as pod/db-postgresql-0 is shown as ready, you can check that the DB is running:

$ kubectl exec -it db-postgresql-0 -- \
  env PGPASSWORD=mongooseim_secret psql -U mongooseim -c 'SELECT * from users'

As a result, you should get an empty list of MongooseIM users. Next, let’s create a three-node MongooseIM cluster using the Helm Chart:

$ helm repo add mongoose https://esl.github.io/MongooseHelm/
$ helm install mim mongoose/mongooseim --set replicaCount=3 --set volatileDatabase=cets \
   --set persistentDatabase=rdbms --set rdbms.tls.required=false --set rdbms.host=db-postgresql \
   --set resources.requests.cpu=200m

By setting persistentDatabase to RDBMS and volatileDatabase to CETS, we are eliminating the need for Mnesia, so no PVCs are created. To connect to PostgreSQL, we specify db-postgresql as the database host. The requested CPU resources are 0.2 of a core per pod, which will be useful for autoscaling. You can monitor the shell window where watch kubectl … is running to make sure that all MongooseIM nodes are ready. It is useful to verify the logs as well, e.g. kubectl logs mongooseim-0 should display logs from the first node. To see how easy it is to scale up horizontally, let’s increase the number of MongooseIM nodes (which correspond to Kubernetes pods) from 3 to 6:

$ kubectl scale --replicas=6 sts/mongooseim

You can use kubectl logs -f mongooseim-0 to see the log messages about each newly added node of the CETS cluster. With helm upgrade, you can do rolling upgrades and scaling as well. The main difference is that the changes done with helm are permanent.

Autoscaling

Should you need automatic scaling, you can set up the Horizontal Pod Autoscaler. Please ensure that you have the Metrics Server installed. There are separate instructions to install it in Docker Desktop. We have already set the requested CPU resources to 0.2 of a core per pod, so let’s start the autoscaler now:

$ kubectl autoscale sts mongooseim --cpu-percent=50 --min=1 --max=8

It is going to keep the CPU usage at 0.1 of a core per pod (which is 50% of 0.2). The threshold is set this low so that scaling up is easy to trigger; in any real application it should be much higher. You should see the cluster getting scaled down until it has just one node, because there is no CPU load yet. See the reported targets in the window where you have the watch kubectl … command running. To trigger scaling up, we need to put some load on the server. We could just fire off random HTTP requests, but let’s instead use the opportunity to explore the MongooseIM CLI and GraphQL API. First, create a new user on the first node with the CLI:

$ kubectl exec -it mongooseim-0 -- \
  mongooseimctl account registerUser --domain localhost --username alice --password secret

Next, you can send XMPP messages in a loop with the GraphQL Client API:

$ LB_HOST=$(kubectl get svc mongooseim-lb \
  --output jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ BASIC_AUTH=$(echo -n 'alice@localhost:secret' | base64)
$ while true; \
  do curl --get -N -H "Authorization:Basic $BASIC_AUTH" \
    -H "Content-Type: application/json" --data-urlencode \
    'query=mutation {stanza {sendMessage(to: "alice@localhost", body: "Hi") {id}}}' \
    http://$LB_HOST:5561/api/graphql; \
  done

You should observe new pods being launched as the load increases. If there is not enough load, run the snippet in a few separate shell windows. Stopping the script should bring the cluster size back down.

Summary

Thanks to CETS and the Helm Chart, MongooseIM 6.2.1 can be easily installed, maintained and scaled in a cloud environment. What we have shown here are the first steps, and there is much more to explore. To learn more, you can read the documentation for MongooseIM or check out the live demo at trymongoose.im. Should you have any questions, or if you would like us to customise, configure, deploy or maintain MongooseIM for you, feel free to contact us.

The post Instant Scalability with MongooseIM and CETS appeared first on Erlang Solutions.

by Pawel Chrzaszcz at May 17, 2024 10:22

May 14, 2024

Erlang Solutions

The Golden Age of Data Transformation in Healthcare

Data is the lifeline of the healthcare industry and we are in its golden age. Staggering amounts are generated daily and precision is key to ensure that every scan, test, prescription, and diagnosis produces data that leads to improved patient outcomes and quality future care.

But what happens when the liquid gold that is reliable data can no longer be accessed? 

Trust and confidence hinge on access to patient information, which provides a holistic view of patient history and the analytics needed to gain actionable insights.

Any disruption to this access could compromise a patient’s health and damage organisational reputation, with potential financial consequences.

Let’s further examine data’s profound impact on healthcare, emphasising its indispensable role in enhancing clinical practices, operational efficiency and fostering patient welfare.

Accessing data

Effective data doesn’t just sit in silos. It is deeply interconnected, providing value far beyond a single use case. We’re talking about a system of multiple processes relying on data being accessible across medical records, diagnostics, devices, medical histories and more.

This is where interoperability comes in. When technologies are interoperable, they become a single, cohesive unit, designed for seamless integration between internal and external systems.

The need for interoperability

The need for interoperability cannot be overstated. 

Data generated in healthcare delivery has been increasing by 47% yearly. Interoperability allows for various stakeholders within the healthcare network to share access. Think of your pharmacies, hospitals, clinics, and insurers, who use that data to access:

  • Billing information
  • Patient data and medical records
  • Wider population data

While most healthcare providers can agree that they need to adopt interoperability to improve their data quality, it is reported that less than 40% of providers (US) have done so well enough to be able to share data across other organisations. There is a clear discrepancy between the increase in healthcare data and the success of system integration. 

Interoperability holds immense promise for the world of healthcare, but there are immediate challenges to be addressed:

  • Too much data: Preventing data overload is not an easy task. Interoperability deals with the influx of EHR (Electronic Health Record) and EMR (Electronic Medical Record) information. It also manages data from IoT sources, internal administrative systems and more. Failing to handle these streams can prove disruptive to overall systems.
  • Lack of resources: Areas of the healthcare industry often lack the financial resources to make the required changes. Initial investment is needed to make systems interoperable, but it enables much larger long-term savings.
  • Questionable data exchange practices: Interoperability has sometimes been used simply to line the pockets of providers. In a practice known as information blocking, fees are imposed for digital access to healthcare data as new revenue streams are identified.

We’ll revisit the current state of data exchange practices in more detail. 

But first, let’s explore how IoT (Internet of Things) can act as a solution to the aforementioned problems, leveraging technologies to drive success.

Exploring IoT

While the healthcare industry struggles with interoperability, it is very well-known for its agility in adopting new technologies. It continuously innovates through the vast landscape known as the Internet of Things (IoT). 

IoT allows for new, innovative applications and services that have the potential to completely transform the face of healthcare. 

There are some majorly compelling use cases for IoT in healthcare, and a lot of its benefits are tangible. Some increasingly popular services include:

  • Telemedicine
  • Remote patient monitoring
  • Smart hospitals
  • Asset tracking and management

A popular example of healthcare using IoT for asset tracking and asset management is from HCA Healthcare. Operating in over 2,000 healthcare facilities across the US, HCA has implemented RFID tags designed to track medical equipment and supplies, all to enhance asset tracking, management and interoperability with other healthcare information.

So what’s the issue? Let’s return to the point about data exchange practices. There is an argument that this set-up is restrictive: data collected through RFID tags is not easily accessible or shareable with other providers or systems, which could hinder the exchange of information and lead to data blocking.

There is also an issue of cost. Initial investment aside, hospitals will be forced to upgrade their existing devices to allow the data from the devices to be sent automatically, potentially costing millions for an establishment. Consider the financial impact that could have on a single hospital ward, let alone an entire annual hospital budget. All of these considerations could impact HCA’s ability to fully leverage RFID technology and overcome potential data-blocking issues.

Data Solutions

When exploring solutions to potential data blocking, it’s worth considering systems that allow facilities to make the most of their existing systems without the need to replace existing medical equipment or incur data access charges from medical device manufacturers who have spotted a potential new revenue model. 

There are technologies available with capabilities that help address the interoperability challenges facing the industry. Reliability is key, and these technologies enable cost-effective solutions designed to integrate subsystems seamlessly, ensuring the efficient exchange of data without the need for a complete system overhaul.

Organisations should always take care when implementing any sort of IoT solution and seamless, cost-effective integrations should always be top of mind.

Securely moving data

A staggering 200 million+ healthcare patient records have been exposed in data breaches in the past decade alone. The healthcare industry has been the most expensive sector for the cost of data breaches 13 years in a row, according to IBM’s Cost of a Data Breach report.

Confidential patient information, financial details and other sensitive data have been compromised. This knock-on effect ultimately compromises various elements of healthcare confidentiality for healthcare providers and patients alike. Amidst this growing challenge lies the need for secure, compliant healthcare.

Utilising Blockchain

By utilising blockchain technology, healthcare providers have access to enhanced privacy and integrity of their medical data, which minimises the associated risks of cyberattacks and security breaches.

Blockchain technology can provide a great solution for securely moving healthcare data. This is thanks to blockchain’s distributed ledger technology (DLT). This technology facilitates the secure transfer of patient’s medical records. It also helps to strengthen data defences and allows for the improved management of the medicine supply chain.

But there are incurred costs to consider. When moving data into Patient Information Systems (PIs), there may be initial upfront costs and maintenance costs to consider. Healthcare providers must weigh this against their budget and need for data security and compliance with privacy regulations.

Regulations and compliance

As well as the financial considerations, other weaknesses in blockchain technology must be considered. 

This includes a lack of standardisation, accessibility and regulatory powers. Take HIPAA, the US healthcare privacy regulation: it has strict mandates in place to protect healthcare information. When discussing public blockchains, data privacy becomes an issue. Public blockchains are designed for transparent transactions, which goes against HIPAA requirements and makes them incompatible. Failure to adhere can lead to fines and various non-compliance penalties.

Moving to private blockchain also poses its obstacles: 

Issues with centralisation: Private blockchains can offer more control over data access and governance, but there are still questions about who owns the data. HIPAA requires clear centralisation of data and ownership, and private blockchains must adhere to this.

Standardised data: HIPAA requires consistent data formats to ensure an accurate data exchange. Achieving this across multiple private blockchains is difficult and could have an impact on collaboration and overall data sharing.

Interoperability: There are various stakeholders involved across many institutions such as insurers, hospitals etc, therefore interoperability is needed to have an effective exchange of data.

Leveraging innovative communication

Healthcare companies looking to manage their patient data and communications are adopting a host of apps and new comms channels to reliably share data. For example:

  • Electronic Health Records (EHR)
  • Electronic Medical Records (EMR)
  • Imaging data
  • Wearables 

But managing healthcare by these various means raises pertinent issues surrounding data security and privacy. Data needs to be stored securely and with the utmost confidentiality. Healthcare personnel must also keep on top of the latest technological advances to ensure data is not vulnerable to hacks or security breaches. But these system upgrades also come at a further long and short-term financial cost to maintain.

There are other ways to leverage secure and effective communication within the healthcare industry using different channels, as highlighted by Pando Health.

Developed by junior doctors and technologists, Pando sought to address the need for secure communication platforms for healthcare professionals.

While they initially used a SaaS messaging platform for their prototype, they soon faced scalability limitations. Through the use of MongooseIM, an open-source, highly scalable messaging server, they were able to meet the needs of healthcare communication without having to replace the entire system.

The results?

  • A secure, NHS-approved chat system.
  • A medical app designed for secure and compliant communication, used by over 65,000 professionals.
  • A collaborative platform, designed for medical professionals without compromising patient security.

There are options for healthcare organisations to ensure secure data channels while complying with legislative requirements and maintaining patient confidentiality.

Being future-ready and future-proof

We’ve already mentioned that data volume in healthcare will continue to expand exponentially. The challenge now lies in ensuring that healthcare providers take a strategic approach to brace themselves for this future growth. A lack of strategy leads to a loss of control over the access and organisation of your data, impacting the patients who need care the most.

When compared to other industries, healthcare already falls behind in the Future-Ready Business Benchmark. But positive steps are being taken industry-wide to strengthen the healthcare industry in 2024 specifically, as we move towards digital-based healthcare, thanks to key trends and breakthrough innovations.

Implementing improved systems

Managing masses of data is becoming increasingly difficult. The need for rapid and reliable access to data, combined with the need for data to be retained for extended periods of time, presents some serious archival and storage challenges. Many of these issues are nearly impossible to solve with existing healthcare legacy systems.

Organisations require scalability and reliability to improve services and modernise. Many places have already started to adopt solutions to consolidate storage and data needs into long-term, future strategies. 

Some of these systems include:

  • The Internet of Medical Things (IoMT): Companies that specialise in IoMT often partner with software professionals to connect to wearables, tracking key health metrics like blood pressure and heart rate in real time.
  • Scalable telehealth services: Various telehealth systems are based on scalable mobile health platforms, where data from patients is acquired and transmitted via wireless communication networks.
  • Machine learning: Auto-scaling algorithms derive insights from continuously increasing healthcare data.

Adopting forward-thinking strategies becomes imperative as the healthcare industry strives to modernise and improve its services. Embracing reliable and scalable services is the only way to ensure longevity and effective management for the long-term care of patients in the digital age.

To conclude

The journey and ever-evolving complexities of healthcare data mark what we can call the Golden Age of Data Transformation. 

Accessing data wherever it is created and stored is a key priority for any digital transformation strategy. 

As we aim for the improvement of operational efficiency and patient outcomes, prioritising data quality, accessibility and interoperability of systems is non-negotiable. Organisations should focus on building scalable and robust infrastructures to tackle these challenges.

Staying flexible and investing in long-term strategies empowers healthcare professionals to navigate the data landscape effectively, ultimately delivering better care for patients.

The post The Golden Age of Data Transformation in Healthcare appeared first on Erlang Solutions.

by Erlang Solutions Team at May 14, 2024 10:23

Comparing Elixir vs Java

After many years of active development using various languages, in the past months, I started learning Elixir. I got attracted to the language after I heard and read nice things about it and the BEAM VM, but – to support my decision about investing time to learn a new language – I tried to find a comparison between Elixir and various other languages I already knew.

What I found was pretty disappointing. In most of these comparisons, Elixir performed much worse than Java, even worse than most of the mainstream languages. With these results in mind, it became a bit hard to justify my decision to learn a new language with such a subpar performance, however fancy its syntax and other features were. After delving into the details of these comparisons, I realised that all of them were based on simplistic test scenarios and specialised use cases, basically a series of microbenchmarks (i.e. small programs created to measure a narrow set of metrics, like execution time and memory usage). It is obvious that the results of these kinds of benchmarks are rarely representative of real-life applications.

My immediate thought was that a more objective comparison would be useful not only for me but for others as well. But before discussing the details, I’d like to compare several aspects of Elixir and Java that are not easily quantifiable.

Development

Learning curve

Before I started learning Elixir, I used various languages like Java, C, C++, Perl, and Python. Despite the fact that all of them are imperative languages while Elixir is a functional one, I found the language concepts clear and concise and, to tell the truth, much less complex than Java’s. Similarly, Elixir syntax is less verbose and easier to read and follow.

When comparing language complexities, there is an often forgotten, but critical thing: It’s hard to develop anything more complex than a Hello World application just by using the core language. To build enterprise-grade software, you should use at least the standard library, but in most cases, many other 3rd party libraries. They all contribute to the learning curve.

In Java, the standard library is part of the JDK and provides basic support for almost every possible use, but it lacked the most important thing, a component framework (like the Spring Framework or OSGi), for about 20 years. During that time, several good component frameworks were developed and became widespread, but they all come with different design principles, configuration and runtime behaviour, so for a novice developer the aggregated learning curve is pretty steep. On the other hand, Elixir has had OTP from the beginning, a collection of libraries originally called the Open Telecom Platform. OTP provides its own component framework, which shares the same concepts and design principles as the core language.

Documentation

I was a bit spoiled by the massive amount of tutorials, guides and forum threads in the Java ecosystem, not to mention the really nice Javadoc that comes with the JDK. It’s not that Elixir lacks appropriate documentation: there are really nice tutorials and guides, and most libraries are documented about as well as their Java counterparts, but it will take time for the ecosystem to reach the same level of quality. There are counterexamples, of course; the Getting Started guide is a piece of cake, and I didn’t need anything else to learn the language and start active development.

IDE support

For me, as a novice Elixir developer, the most significant roadblock was the immature IDE support. Although I understand that supporting a dynamically typed language is much harder than a statically typed one like Java, I miss the most basic refactoring support in both IntelliJ IDEA and VS Code. I know that Emacs offers more features, but being a hardcore vi user, I kept some distance from it.

Fortunately, these shortcomings can be improved easily, and I’m sure there are enough interested developers in the open-source world, but as usual, some coordination would be needed to facilitate the development.

Programming model

Comparing entire programming models of two very different languages is too much for a blog entry, so I’d like to focus on the language support for performance and reliability, more precisely several aspects of concurrency, memory management and error handling.

Concurrency and memory management

The Java Memory Model is based on POSIX Threads (pthreads). Heap memory is allocated from a global pool and shared between threads. Resource synchronisation is done using locks and monitors. A conventional Java thread (Platform Thread) is a simple wrapper around an OS thread. Since an OS thread comes with its own large stack and is scheduled by the OS, it is not lightweight in any way. Java 21 introduced a new thread type (Virtual Thread) which is more lightweight and scheduled by the JVM, so it can be suspended during a blocking operation, allowing the OS thread to mount and execute another Virtual Thread. Unfortunately, this is only an afterthought. While it can improve the performance of many applications, it makes the already complex concurrency model even more complicated. The same is true for Structured Concurrency: while it can improve reliability, it will also increase complexity, especially if it is mixed with the old model. The same goes for 3rd party libraries: adopting the new features and upgrading old deployments will take time, typically years. Until then, a mixed model will be used, which can introduce additional issues.

There are several advantages to adopting POSIX Threads, however: it is familiar to developers of languages implementing similar models (e.g. C, C++ etc.), and it keeps the VM internals fairly simple and performant. On the other hand, this model makes it hard to effectively schedule tasks and heavily constrains the design of reliable concurrent code. And most importantly, it introduces issues related to concurrent access to shared resources. These issues can materialise in performance bottlenecks and runtime errors that are hard to debug and fix.

The concurrency model of Elixir is based on different concepts, introduced by Erlang in the 80s. Instead of scheduling tasks as OS threads, it uses a construct called a “process”, which is different from an operating system process. These processes are very lightweight, operate on independently allocated and deallocated memory areas, and are scheduled by the BEAM VM. Scheduling is done by multiple schedulers, one for each CPU core. There is no shared memory, synchronised resource access, or global garbage collection; inter-process communication is performed using asynchronous message passing. This model eliminates the conventional concurrency-related problems and makes it much easier to write massively concurrent, scalable applications. There is one drawback, however: due to these conceptual differences, the learning curve is a bit steeper for developers experienced only with pthreads-related models.
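
A tiny example of this model: thousands of lightweight processes can be spawned cheaply, and the only way they interact is by sending messages. This is a self-contained sketch, not taken from any particular application:

parent = self()

# Each "process" here is a lightweight BEAM process, not an OS thread.
pids =
  for n <- 1..1_000 do
    spawn(fn -> send(parent, {:squared, n * n}) end)
  end

# Collect one message per spawned process; no memory is shared at any point.
results =
  for _ <- pids do
    receive do
      {:squared, value} -> value
    end
  end

IO.puts("Collected #{length(results)} results without any shared memory")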

Fault tolerance

Error recovery and fault tolerance in general are underrated in the world of enterprise software. For some reason, we think that fault tolerance is for mission-critical applications like controlling nuclear power plants, running medical devices or managing aircraft avionics. In reality, almost every business has critical software assets and applications that should be highly available or data, money and consumer trust will be lost. Redundancy may prevent critical downtimes, but no amount of redundancy can mitigate the risk of data corruption or other similar errors, not to mention the cost of duplicated resources.

Java and Elixir handle errors in very different ways. While Java follows decades-old conventions and treats errors as exceptional situations, Elixir inherited a far more powerful concept from Erlang, originally borrowed from the field of fault-tolerant systems. In Elixir, errors are part of the normal behaviour of the application and are treated as such. Since there are no shared resources between processes, an error during the execution of a process neither affects nor propagates to the others; their states remain consistent, so the application can safely recover from the error. In addition, supervision trees make sure that failed components are replaced immediately.
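
Process isolation is easy to demonstrate: a crashing process does not corrupt the caller, which merely receives a :DOWN message it can act upon. A minimal sketch:

# The spawned process crashes, but the caller's state is untouched;
# it only receives a :DOWN message describing the failure.
{pid, ref} = spawn_monitor(fn -> raise "boom" end)

receive do
  {:DOWN, ^ref, :process, ^pid, reason} ->
    IO.puts("Worker failed with #{inspect(reason)}; carrying on safely")
end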

This way, the BEAM VM provides guarantees against data loss during error recovery. But this kind of error recovery is possible only if no errors can leave the system in an inconsistent state. Since Java relies on OS threads, and shared memory cannot be protected from misbehaving threads, the JVM offers no such safeties. Although there are Java libraries that provide better fault tolerance by implementing different programming models (probably the most noteworthy is Akka, implementing the Actor Model), the number of 3rd party libraries supporting these programming models is very limited.

Runtime

Performance

For CPU or memory-intensive tasks, Java is a good choice, due to several things, like a more mature Just In Time compiler and tons of runtime optimisations in the JVM, but most importantly, because of its memory model. Since memory allocation and thread handling are basically done on OS level, the management overhead is very low.

On the other hand, this advantage vanishes when concurrent execution is paired with a mixed workload, like blocking operations and data exchange between concurrent tasks. This is the field where Elixir thrives since Erlang and the BEAM VM were originally designed for these kinds of tasks. Due to the well-designed concurrency model, memory and other resources are not shared, requiring no synchronisation. BEAM processes are more lightweight than Java threads, and their scheduling is done at VM level, leading to fewer context switches and better scheduling granularity.

Concurrent operations also affect memory use. Since a Java thread is not lightweight, the more threads are waiting for execution, the more memory is used. In parallel with the memory allocations related to the increasing number of waiting threads, the overhead caused by garbage collection also grows.

Today’s enterprise applications are usually network-intensive. We have separate databases, microservices, clients accessing our services via REST APIs etc. Compared to operations on in-memory data, network communication is many orders of magnitude slower, latency is not deterministic, and the probability of erroneous responses, timeouts or infrastructure-related errors is not negligible. In this environment, Elixir and the BEAM VM offer more flexibility and concurrent performance than Java.

Scalability

When we talk about scalability, we should mention both vertical and horizontal scalability. While vertical scalability is about making a single hardware bigger and stronger, horizontal scalability deals with multiple computing nodes.

Java is a conventional language in the sense that it is built for vertical scaling, but it was designed at a time when vertical scaling meant running on bigger hardware with better single-core performance. It performs reasonably well on multi-core architectures, but its scalability is limited by its concurrency model, since massive concurrency comes with frequent cache invalidations and lock contention on shared resources. Horizontal scaling magnifies these issues due to the increased latency. Moreover, since the JVM was also designed for vertical scaling, there is no simple way to share or distribute workload between multiple nodes; it requires additional libraries or frameworks and, in many cases, different design principles and massive code changes.

On the other hand, a well-designed Elixir application can scale up seamlessly, without code changes. There are no shared resources that require locking, and asynchronous messaging is perfect for both multi-core and multi-node applications. Of course, Elixir itself does not prevent the developers from introducing features that are hard to scale or require additional work, but the programming model and the OTP make horizontal scaling much easier.

Energy efficiency

It is a well-known fact that resource and energy usage are highly correlated metrics. However, there is another, often overlooked factor that contributes significantly to energy usage. The concurrency limit is the number of concurrent tasks an application can execute without having stability issues. Near the concurrency limit, applications begin to use the CPU excessively, therefore the overhead of context switches begins to matter a lot. Another consequence is the increased memory usage, caused by the growing number of tasks waiting for CPU time. Since frequent context switches are also memory intensive, we can safely say that applications become much less energy efficient near the concurrency limit.

Maintenance

Tackling concurrency issues is probably the hardest part of any maintenance task. We certainly collect metrics to see what is happening inside the application, but these metrics often fail to provide enough information to identify the root cause of concurrency problems. We have to trace the execution flow to get an idea of what’s going on inside. Profiling or debugging of such issues comes with a certain cost: using these tools may alter the performance behaviour of the system in a way that makes it hard to reproduce the issue or identify the root cause.

Due to the message-passing concurrency model, the code base of a typical concurrent Elixir application is less complex and free from resource-sharing-related implementation mistakes often poisoning Java code, eliminating the need for this kind of maintenance. Also, the BEAM VM is designed with traceability in mind, leading to lower performance cost of tracing the execution flow.

Dependencies

Most of the enterprise applications heavily depend on 3rd party libraries. In the Java ecosystem, even the component framework comes from a 3rd party, with its own dependencies on other 3rd party libraries. This creates a ripple effect that makes it hard to upgrade just one component of such a system, not to mention the backward incompatible changes potentially introduced by newer 3rd party components. Anyone who has tried to upgrade a fairly large Maven project could tell stories about this dependency nightmare.

The Elixir world is no different, but the number of required 3rd party libraries can be much smaller since the BEAM VM and the OTP provide a few useful things (like the component platform, asynchronous messaging, seamless horizontal scalability, supervision trees), functionality that is very often used and can only be found in 3rd party libraries for Java.

Let’s get more technical

As I mentioned before, I was not satisfied with other language comparisons, as they are usually based on simplistic or artificial test cases, so I wanted to create something that mimics a common but easy-to-understand scenario, and then measure the performance and complexity of the different implementations. Although real-world performance is rarely just a number (it is a composite of several metrics like CPU and memory usage, and I/O and network throughput), I tried to quantify performance using the processing time: the time an application needs to finish a task. Another important aspect is code complexity, since the size and complexity of an implementation contribute to development and maintenance costs.

Test scenario

Most real-world applications process data in a concurrent way. This data originates from a database or some other kind of backend, a microservice, or a 3rd party service. Either way, the data is transferred over a network. In the enterprise world, the dominant form of network communication is HTTP, often as part of a REST workflow. That is why I chose to measure how fast and reliable REST clients can be implemented in Elixir and Java, and in addition, how complex each implementation is.

The workflow starts with reading a configuration from a disk and then gathering data according to the configuration using several REST API calls. There are dependencies in between workflow steps, so several of them can’t be done concurrently, while the others can be done in parallel. The final step is to process the received data.

The actual scenario is to evaluate rules, where each rule contains information used to gather data from 3rd party services and predict utility stock prices based on historical weather, stock price and weather forecast data.

Rule evaluation is done in a concurrent manner. Both the Elixir and Java implementations are configured to evaluate 2 rules concurrently.

Implementation details

Elixir

The Elixir-based REST client is implemented as an OTP application. I tried to minimise the external dependencies since I’d like to focus on the performance of the language and the BEAM VM, and the more 3rd party libraries the application depends on, the more probable there’ll be some kind of a bottleneck.

The dependencies I use:

  • Finch: a very performant HTTP client
  • Jason: a fast JSON parser
  • Benchee: a benchmarking tool

Each concurrent task is implemented as a process, and data aggregation is done using asynchronous messaging. The diagram below shows the rule evaluation workflow.

There are altogether 8 concurrent processes in each task, one process is spawned for each rule, and then 3 processes are started to retrieve stock, historical weather and weather prediction data.
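
The fan-out/fan-in shape of that workflow can be sketched with Task.async/await. This is an illustrative rewrite, not the benchmark’s actual code; the fetch_* and predict functions are placeholders for the REST calls and prediction logic:

defmodule RuleEvaluator do
  def evaluate(rule) do
    # Fan out: the three data sources are fetched concurrently per rule.
    tasks = [
      Task.async(fn -> fetch_stock_data(rule) end),
      Task.async(fn -> fetch_historical_weather(rule) end),
      Task.async(fn -> fetch_weather_forecast(rule) end)
    ]

    # Fan in: wait for all three results, then run the prediction.
    [stock, history, forecast] = Task.await_many(tasks, 30_000)
    predict(rule, stock, history, forecast)
  end

  # Placeholders standing in for the real REST calls and prediction logic.
  defp fetch_stock_data(_rule), do: :stock_data
  defp fetch_historical_weather(_rule), do: :weather_history
  defp fetch_weather_forecast(_rule), do: :weather_forecast
  defp predict(_rule, _stock, _history, _forecast), do: :prediction
end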

Java

The Java-based REST client is implemented as a standalone application. Since the Elixir application uses OTP, the fair comparison would be to use some kind of a component framework, like Spring or OSGi, since both are very common in the enterprise world. However, I decided not to use them, as they both would contribute heavily to the complexity of the application, although they wouldn’t change the performance profile much.

Dependencies:

There are two implementations of concurrent task processing. The first one uses two platform thread pools, for rule processing and for retrieving weather data. This might seem a bit naive, as this workflow could be optimised better, but please keep in mind that

  1. I wanted to model a generic workflow, and it is quite common that several thread pools are used for concurrent processing of various tasks.
  2. My intent was to find the right balance between implementation complexity and performance.

The other implementation uses Virtual Threads for rule processing and weather data retrieval.

The diagram below shows the rule evaluation workflow.

There are altogether 6 concurrent threads in each task, one thread is started for each rule, and then 2 threads are started to retrieve historical weather and weather prediction data.

Results

Hardware

Google Compute Node

  • CPU Information: AMD EPYC 7B12
  • Number of Available Cores: 8
  • Available memory: 31.36 GB

Tasks | Elixir | Java (Platform Threads) | Java (Virtual Threads)
320   | 2.52 s | 2.52 s                  | 2.52 s
640   | 2.52 s | 2.52 s                  | 2.52 s
1280  | 2.51 s | 2.52 s (11% errors)     | 2.52 s
2560  | 5.01 s | –                       | 2.52 s (7 errors)
5120  | 5.01 s | –                       | High error rate
10240 | 5.02 s | –                       | –
20480 | 7.06 s | –                       | –

Detailed results

Elixir

  • Elixir 1.16.2
  • Erlang 26.2.4
  • JIT enabled: true

Concurrent tasks per minute | Average | Median | 99th % | Remarks
5     | 2.5 s  | 2.38 s | 3.82 s |
10    | 2.47 s | 2.38 s | 3.77 s |
20    | 2.47 s | 2.41 s | 3.77 s |
40    | 2.5 s  | 2.47 s | 3.79 s |
80    | 2.52 s | 2.47 s | 3.82 s |
160   | 2.52 s | 2.49 s | 3.78 s |
320   | 2.52 s | 2.49 s | 3.77 s |
640   | 2.52 s | 2.47 s | 3.81 s |
1280  | 2.51 s | 2.47 s | 3.8 s  |
2560  | 5.01 s | 5.0 s  | 5.17 s |
3840  | 5.01 s | 5.0 s  | 5.11 s |
5120  | 5.01 s | 5.0 s  | 5.11 s |
10240 | 5.02 s | 5.0 s  | 5.15 s |
15120 | 5.53 s | 5.56 s | 5.73 s |
20480 | 7.6 s  | 7.59 s | 8.02 s |

Java 21, Platform Threads

  • OpenJDK 64-Bit Server VM, version 21

Concurrent tasks per minute | Average | Median | 99th % | Remarks
5    | 2.5 s  | 2.36 s | 3.71 s |
10   | 2.54 s | 2.48 s | 3.69 s |
20   | 2.5 s  | 2.5 s  | 3.8 s  |
40   | 2.56 s | 2.45 s | 3.84 s |
80   | 2.51 s | 2.46 s | 3.8 s  |
160  | 2.5 s  | 2.5 s  | 3.79 s |
320  | 2.52 s | 2.46 s | 3.8 s  |
640  | 2.52 s | 2.48 s | 3.8 s  |
1280 | 2.52 s | 2.47 s | 3.8 s  | 11% HTTP timeouts

Java 21, Virtual Threads

  • OpenJDK 64-Bit Server VM, version 21

Concurrent tasks per minute | Average | Median | 99th % | Remarks
5    | 2.46 s | 2.49 s | 3.8 s  |
10   | 2.51 s | 2.52 s | 3.68 s |
20   | 2.56 s | 2.44 s | 3.79 s |
40   | 2.53 s | 2.46 s | 3.8 s  |
80   | 2.52 s | 2.48 s | 3.79 s |
160  | 2.52 s | 2.49 s | 3.77 s |
320  | 2.52 s | 2.48 s | 3.8 s  |
640  | 2.52 s | 2.49 s | 3.8 s  |
1280 | 2.52 s | 2.48 s | 3.8 s  |
2560 | 2.52 s | 2.48 s | 3.8 s  | Errors: 7 (HTTP client EofException)
3840 | N/A    | N/A    | N/A    | Large amount of HTTP timeouts

Stability

Under high load, strange things can happen. Concurrency-related (thread contention, races), operating system or VM-related (resource limits) and hardware-specific (memory, I/O, network etc.) errors may occur at any time. Many of them cannot be handled by the application, but the runtime usually can (or should) deal with them to provide reliable operation even in the presence of faults.

During the test runs, my impression was that the BEAM VM is superior in this task, in contrast to the JVM which entertained me with various cryptic error messages, like the following one:

java.util.concurrent.ExecutionException: java.io.IOException
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at esl.tech_shootout.RuleProcessor.evaluate(RuleProcessor.java:38)
        at esl.tech_shootout.RuleProcessor.lambda$evaluateAll$0(RuleProcessor.java:29)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
        at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: java.io.IOException
        at java.net.http/jdk.internal.net.http.HttpClientImpl.send(HttpClientImpl.java:586)
        at java.net.http/jdk.internal.net.http.HttpClientFacade.send(HttpClientFacade.java:123)
        at esl.tech_shootout.RestUtils.callRestApi(RestUtils.java:21)
        at esl.tech_shootout.StockService.stockSymbol(StockService.java:23)
        at esl.tech_shootout.StockService.stockData(StockService.java:17)
        at esl.tech_shootout.RuleProcessor.lambda$evaluate$3(RuleProcessor.java:37)
        ... 4 more
Caused by: java.nio.channels.ClosedChannelException
        at java.base/sun.nio.ch.SocketChannelImpl.ensureOpen(SocketChannelImpl.java:195)

Although in this case I know the cause of this error, the error message is not very informative. Compare the above stack trace with the error raised by Elixir and the BEAM VM:

16:29:53.822 [error] Process #PID<0.2373.0> raised an exception
** (RuntimeError) Finch was unable to provide a connection within the timeout due to excess queuing for connections. Consider adjusting the pool size, count, timeout or reducing the rate of requests if it is possible that the downstream service is unable to keep up with the current rate.

    (nimble_pool 1.0.0) lib/nimble_pool.ex:402: NimblePool.exit!/3
    (finch 0.18.0) lib/finch/http1/pool.ex:52: Finch.HTTP1.Pool.request/6
    (finch 0.18.0) lib/finch.ex:472: anonymous fn/4 in Finch.request/3
    (telemetry 1.2.1) /home/sragli/git/tech_shootout/elixir_demo/deps/telemetry/src/telemetry.erl:321: :telemetry.span/3
    (elixir_demo 0.1.0) lib/elixir_demo/rule_processor.ex:56: ElixirDemo.RuleProcessor.retrieve_weather_data/3

This thread dump shows what happens when we mix different concurrency models:

Thread[#816,HttpClient@6e579b8-816,5,VirtualThreads]
 at java.base@21/jdk.internal.misc.Unsafe.park(Native Method)
 at java.base@21/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:269)
 at java.base@21/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1758)
 at app//org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:219)
 at app//org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:1139)


The Jetty HTTP client is a nice piece of code and very performant, but uses platform threads in its internals, while our benchmark code relies on virtual threads.

That’s why I had to switch from JDK HttpClient to Jetty:

Caused by: java.io.IOException: /172.17.0.2:60876: GOAWAY received
       at java.net.http/jdk.internal.net.http.Http2Connection.handleGoAway(Http2Connection.java:1166)
       at java.net.http/jdk.internal.net.http.Http2Connection.handleConnectionFrame(Http2Connection.java:980)
       at java.net.http/jdk.internal.net.http.Http2Connection.processFrame(Http2Connection.java:813)
       at java.net.http/jdk.internal.net.http.frame.FramesDecoder.decode(FramesDecoder.java:155)
       at java.net.http/jdk.internal.net.http.Http2Connection$FramesController.processReceivedData(Http2Connection.java:272)
       at java.net.http/jdk.internal.net.http.Http2Connection.asyncReceive(Http2Connection.java:740)
       at java.net.http/jdk.internal.net.http.Http2Connection$Http2TubeSubscriber.processQueue(Http2Connection.java:1526)
       at java.net.http/jdk.internal.net.http.common.SequentialScheduler$LockingRestartableTask.run(SequentialScheduler.java:182)
       at java.net.http/jdk.internal.net.http.common.SequentialScheduler$CompleteRestartableTask.run(SequentialScheduler.java:149)
       at java.net.http/jdk.internal.net.http.common.SequentialScheduler$SchedulableTask.run(SequentialScheduler.java:207)
       at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
       at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
       at java.base/java.lang.Thread.run(Thread.java:1583)

According to the HTTP/2 standard, an HTTP server can send a GOAWAY frame at any time (typically under high load; in our case, after about 2000 requests/min) to indicate connection shutdown. It is the client’s responsibility to handle this situation. The HttpClient implemented in the JDK fails to do that internally, and it does not provide enough information to make proper error handling possible.

Concluding remarks

As I expected, both the Elixir and Java applications performed well in low concurrency settings, but the Java application became less stable as the number of concurrent tasks increased, while Elixir exhibited rock-solid performance with minimal slowdown.

The BEAM VM was also superior in providing reliable operation under high load, even in the presence of faults. After about 2000 HTTP requests per second, timeouts were inevitable, but they didn’t impact the stability of the application. On the other hand, the JVM started to behave very erratically after about 1000 (Platform Threads-based implementation) or 3000 (Virtual Threads-based implementation) concurrent tasks.

Code complexity

There are a few widely accepted complexity metrics to quantify code complexity, but I think the most representative ones are the Lines of Code and Cyclomatic Complexity.

Lines of Code, or more precisely Source Lines of Code (SLoC for short), quantifies the total number of lines in the source code of an application. Strictly speaking, it is not very useful as a complexity measure, but it is a good indicator of how much effort is needed to read through a particular codebase. SLoC is measured by counting the total number of lines in all source files, excluding dependencies and configuration files.

Cyclomatic Complexity (CC for short) is more technical as it measures the number of independent execution paths through the source code. CC measurement works in a different way for each language. Cyclomatic Complexity of the Elixir application is measured using Credo, and CC of the Java application is quantified using the CodeMetrics plugin of IntelliJ IDEA.

These numbers show that there is a clear difference in complexity even between such small and simple applications. While 9 is not a particularly high score for Cyclomatic Complexity, it indicates that the logical flow is not simple. It might not be concerning, but what’s more problematic is that even the most basic error handling increases the complexity by 3.
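
To illustrate the point with a generic sketch (not the actual benchmark code; the module and function names below are purely illustrative), adding even simple error handling introduces extra branches that Credo counts towards a function’s Cyclomatic Complexity:

#+begin_src Elixir
defmodule Demo.Weather do
  # Happy path only: a single execution path, so the cyclomatic complexity is 1.
  def fetch!(city) do
    city |> build_url!() |> http_get!() |> parse!()
  end

  # The same logic with basic error handling: every additional clause is
  # another independent path that Credo counts towards the complexity score.
  def fetch(city) do
    with {:ok, url} <- build_url(city),
         {:ok, body} <- http_get(url),
         {:ok, weather} <- parse(body) do
      {:ok, weather}
    else
      {:error, :invalid_city} = error -> error
      {:error, reason} -> {:error, {:request_failed, reason}}
    end
  end

  # Stubs standing in for the real HTTP and parsing code.
  defp build_url!(city), do: "https://example.com/weather/" <> city
  defp http_get!(_url), do: ~s({"temp": 21})
  defp parse!(body), do: body

  defp build_url(""), do: {:error, :invalid_city}
  defp build_url(city), do: {:ok, build_url!(city)}
  defp http_get(url), do: {:ok, http_get!(url)}
  defp parse(body), do: {:ok, parse!(body)}
end
#+end_src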

Conclusion

These results might paint a black-and-white picture, but keep in mind that both Elixir and Java have their advantages and shortcomings. If we are talking about CPU or memory-intensive operations in low concurrency mode, Java is the clear winner thanks to its programming model and the huge amount of optimisation done in the JVM. On the other hand, Elixir is a better choice for highly concurrent, available and scalable applications, not to mention the field of fault-tolerant systems. Elixir also has the advantage of being a more modern language with less syntactic clutter and less need to write boilerplate code.

The post Comparing Elixir vs Java  appeared first on Erlang Solutions.

by Attila Sragli at May 14, 2024 09:39

May 09, 2024

Erlang Solutions

A Comprehensive Guide to Elixir vs Ruby

Deciding which programming language is best for your long-term business strategy is a difficult decision. If you’re tossing a coin between Elixir and Ruby, or considering a shift from one to the other, you probably have a lot of questions about both languages, which we compare for you in this Elixir vs Ruby guide.

Let’s explore the advantages and disadvantages of each language, as well as their optimal use cases and other key points, providing you with a clearer insight into both.

Elixir vs Ruby: A History

To gain a better understanding of the frequent Ruby and Elixir comparisons, let’s take it back to the ’90s, when Ruby was created by Yukihiro Matsumoto. He combined the best features of Perl, Smalltalk, Eiffel, Ada and Lisp to simplify the tasks of developers. But Ruby’s popularity surged with the release of the open-source framework Ruby on Rails.

This launch proved to be revolutionary in the world of web development, making code tasks achievable in a matter of days instead of months. As one of the leading figures on the Rails Core Team, Jose Valim recognised the potential for evolution within the Ruby language.

In 2012, Elixir was born: a functional programming language built on the Erlang virtual machine (VM). The aim of Elixir was to create a language with the friendly syntax of Ruby, while boasting fault tolerance, concurrency capabilities and a commitment to developer satisfaction.

The Elixir community also has Phoenix, an open-source web framework created by Chris McCord. Working with Jose Valim and carrying over the core values of Ruby on Rails, he produced a highly effective framework for the Elixir ecosystem.

So what is Elixir?

Elixir describes itself as a “dynamic, functional language for building scalable and maintainable applications.” It is a great choice for any situation where scalability, performance and productivity are priorities, particularly within IoT endeavours and web applications.

Elixir runs on the BEAM virtual machine, originating from Erlang’s virtual machine (VM). It is well known for managing fault-tolerant, low-latency distributed systems. Created in 1986 by the Ericsson company, Erlang was designed to address the growing demands within the telecoms industry.

It was later released as free and open-source software in 1998 and has since grown in popularity thanks to the demand for concurrent services.  If you would like a more detailed breakdown explaining the origins and current state of the Elixir programming language, check out our “What is Elixir” post in full.

What is Ruby?

Ruby stands out as a highly flexible programming language. Developers who code in Ruby can change the behaviour of the language itself, for example by reopening and modifying existing classes. Unlike compiled languages like C or C++, Ruby is an interpreted language, similar to Python.

But unlike Python, which favours a single, definitive solution for every problem, Ruby encourages multiple problem-solving approaches. Depending on your project, this approach has pros and cons.

One hallmark of Ruby is its user-friendly nature. It hides a lot of intricate details from the programmer, making it much easier to use compared to other popular languages. But it also means that finding bugs in code can be harder. 

There is a major convenience factor to coding in Ruby. Any code that you write will run on every major operating system, such as macOS, Windows and Linux, without having to be ported.

The pros of Elixir

If you’re considering migrating from Ruby to Elixir, you’ll undoubtedly be looking into its benefits and some key advantages it has over other languages. So let’s jump into some of its most prominent features.

Built on the BEAM

As mentioned, Elixir operates on the Erlang virtual machine (BEAM). BEAM is one of the oldest virtual machines in IT history and remains widely used. It is ideal for building and managing systems with many concurrent connections.

Immutable Data

A major advantage of Elixir is its support for immutable data, which simplifies code understanding. Elixir ensures that data is unchanged once it has been defined, enhancing code reliability by preventing unexpected changes to variables, and making for easier debugging.
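
As a quick, generic illustration (not tied to any particular project), transforming data in Elixir always returns a new value and leaves the original untouched:

#+begin_src Elixir
prices = [10, 20, 30]

# Enum.map returns a new list; `prices` itself is never mutated.
discounted = Enum.map(prices, fn price -> price * 0.9 end)

IO.inspect(prices)      # => [10, 20, 30]
IO.inspect(discounted)  # => [9.0, 18.0, 27.0]
#+end_src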

Top Performance

Elixir offers excellent performance. The Phoenix framework is the most popular web development framework in Elixir, boasting remarkable speed and response times (a matter of milliseconds). While Rails isn’t a slow system either, Elixir’s performance just edges it out, making it a superior choice. We’ve previously made the case for Elixir’s great performance, making it one of the fastest programming languages, in our previous post.

Parallelism

Parallel systems often face latency and responsiveness challenges because of how much computing power a single task can demand. Elixir addresses this with the BEAM’s clever process scheduler, which proactively reallocates control between processes.

So even under heavy loads, a slow process isn’t able to significantly impact the overall performance of an Elixir application. This capability ensures low latency, a key requirement for modern web applications.
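
A minimal sketch of what this looks like in practice (the sleep times are purely illustrative): Task.async_stream runs each job in its own process, and the scheduler stops one slow job from starving the rest.

#+begin_src Elixir
# One deliberately slow job among several fast ones.
jobs = [1000, 10, 10, 10, 10]

jobs
|> Task.async_stream(
  fn ms ->
    Process.sleep(ms) # stand-in for real work
    ms
  end,
  max_concurrency: System.schedulers_online(),
  ordered: false
)
|> Enum.each(fn {:ok, ms} -> IO.puts("finished the #{ms} ms job") end)
#+end_src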

Highly Fault-Tolerant

In most programming languages, when a bug is identified in one process, it crashes the whole application. But Elixir handles this differently. It has unmatched fault tolerance. 

A fan favourite of the language, Elixir inherits Erlang’s “let it crash” philosophy, allowing processes to restart after a critical failure. This eliminates the need for complex recovery strategies.
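
Here is a minimal, generic sketch of that philosophy (module names are illustrative): a stateful worker is placed under a supervisor, and when it is killed it simply comes back with a clean state.

#+begin_src Elixir
defmodule Counter do
  use Agent

  def start_link(_opts), do: Agent.start_link(fn -> 0 end, name: __MODULE__)
  def bump, do: Agent.update(__MODULE__, &(&1 + 1))
  def value, do: Agent.get(__MODULE__, & &1)
end

# The supervisor restarts Counter automatically; no bespoke recovery code is needed.
{:ok, _sup} = Supervisor.start_link([Counter], strategy: :one_for_one)

Counter.bump()
Process.exit(Process.whereis(Counter), :kill) # simulate a crash
Process.sleep(100)
Counter.value() # => 0, a fresh process with a clean state
#+end_src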

Distributed Concurrency

Elixir supports distributed concurrency, allowing you to run concurrent processes both on a single computer and across multiple machines.

Scalability

Elixir gets the most out of a single machine, which is perfect for systems or applications that need to scale with traffic. Thanks to its architecture, there is no need to continuously add servers to accommodate demand.

The pros of Ruby

Now let’s explore the benefits that Ruby has to offer. There are massive advantages for developers, from its expansive library ecosystem to its user-friendly syntax and supportive community.

Huge ecosystem

Not only is Ruby a very user-friendly programming language, but it also boasts a vast library ecosystem. Whatever feature you want to implement, there is likely a library already available to help you build applications quickly.

Easy to work with

Ruby’s creator aimed to make development a breeze and pleasant for users. For this reason, Ruby is straightforward, clean and has an easily understandable syntax. This makes for very easy and productive development, which is why it remains such a popular choice with developers.

Helpful, vibrant community

The Ruby community is a vibrant one that thrives on consistently publishing solutions that are openly available to the public, like its ever-popular Ruby community page. This environment is very advantageous for new developers, who can easily find assistance and valuable solutions online.

Commitment to standards

Ruby offers strong support for web standards across all aspects of an application, from its user interface to data transfer.

When building an application with Ruby, developers adhere to already established software design principles such as  “coding by convention,” “don’t repeat yourself,” and the “active record pattern.”

So why are all of these points considered so advantageous to Ruby?

Firstly, it simplifies the learning curve for beginners and enhances the professional experience. It also lends itself to better code readability, which is great for collaboration between developers. Finally, it reduces the amount of code needed to implement features.

Elixir v Ruby: Key differences

There are some significant differences between the two powerhouse languages. While Elixir and Ruby are both versatile, dynamic languages, Elixir code, unlike Ruby’s, undergoes ahead-of-time compilation to Erlang VM (virtual machine) bytecode, which substantially enhances its single-core performance. Elixir’s focus is on code readability and expressiveness, while its robust macro system facilitates easy extensibility.

Elixir and Ruby’s syntax also differ in several ways. For instance, Elixir uses pipes (the |> operator) to pass the outcome of one expression as the first argument to the next function, while Ruby employs “.” for method chaining.

Also, Elixir provides explicit backing for immutable data structures, a feature not directly present in Ruby. It also offers first-rate support for typespecs, a capability lacking in Ruby.
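
The following generic snippet shows both points in one place: the |> operator threads each result into the next call as its first argument, and a @spec typespec documents the function’s types for tools such as Dialyzer.

#+begin_src Elixir
defmodule TextStats do
  @spec word_count(String.t()) :: non_neg_integer()
  def word_count(text) do
    text
    |> String.downcase()
    |> String.split(~r/\s+/, trim: true)
    |> length()
  end
end

TextStats.word_count("Elixir is a dynamic, functional language")
# => 6
#+end_src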

Best use for developers

Elixir is a great option for developers who want the productivity of Ruby combined with greater scalability. It performs just as well as Ruby for Minimum Viable Products (MVPs) and startups, while demonstrating robust scalability for larger, more extensive applications.

For companies who want swift delivery without sacrificing quality, Elixir works out as a great overall choice.

Exploring the talent pool for Elixir and Ruby

Elixir is a newer language than Ruby and therefore has a smaller pool of developers. But let’s not forget it’s also a functional language, and functional programming typically demands a different way of thinking compared to object-oriented programming. 

As a result, Elixir developers tend to have more experience and understanding of programming concepts. And the Elixir community is also rapidly growing. Those who are familiar with Ruby commonly make the switch to Elixir.

Although Elixir developers might be more difficult to find, once you do, they are worth the search.

Who is using Elixir and Ruby?

Let’s take a look at some highly successful companies that have used Ruby and Elixir:

Elixir

Discord: Uses Elixir for its real-time messaging infrastructure, benefiting from Elixir’s concurrency and fault tolerance.

Pinterest: Takes advantage of Elixir’s scalability and fault-tolerance features.

Bleacher Report: A sports news website that uses Elixir for its backend services, including real-time updates and notifications.

Moz: Uses Elixir for its backend services, benefiting from its concurrency model and fault tolerance.

Toyota Connected: Leverages Elixir for building scalable and fault-tolerant backend systems for connected car applications.

Ruby

Airbnb: Uses Ruby on Rails for its web platform, including features like search, booking, and reviews.

GitHub: Is built primarily using Ruby on Rails.

Shopify: Relies on Ruby on Rails for its backend infrastructure.

Basecamp: Built using Ruby on Rails.

Kickstarter: Uses Ruby on Rails for its website and backend services.

So, what to choose?

Migrating or simply deciding between programming languages presents an opportunity to enhance performance, scalability and robustness. But it is a journey, one that requires careful planning and execution to achieve the best long-term results for your business.

While the Ruby community offers longevity, navigating outdated solutions can be a challenge. Nonetheless, the overlap of the Ruby and Elixir communities fosters a supportive environment for transitioning from one to the other. Elixir has a learning curve that may deter some, but for developers seeking features such as typespecs and parallel computing, it is invaluable.

If you’re already working with existing Ruby infrastructure, incorporating Elixir to address scaling and reliability issues is a viable option. The synergies between the two languages promote a seamless transition. 

Ultimately, while Ruby remains a solid choice, the advantages of Elixir make it a compelling option worth considering for future development and business growth. You can learn more about our Elixir offering on our Elixir page, or by contacting our team directly.

The post A Comprehensive Guide to Elixir vs Ruby appeared first on Erlang Solutions.

by Erlang Solutions Team at May 09, 2024 10:21

May 05, 2024

The XMPP Standards Foundation

The XMPP Newsletter April 2024

XMPP Newsletter Banner

Welcome to the XMPP Newsletter, great to have you here again! This issue covers the month of April 2024.

XSF Announcements

If you are interested in joining the XMPP Standards Foundation as a member, please apply by 19th May 2024!

XMPP and Google Summer of Code 2024

The XSF has been accepted as a hosting organisation at GSoC in 2024 again! These XMPP projects have received a slot and are in the community bonding phase now:

XSF and Google Summer of Code 2024

XSF Fiscal Hosting Projects

The XSF offers fiscal hosting for XMPP projects. Please apply via Open Collective. For more information, see the announcement blog post. Current projects you can support:

XMPP Events

  • XMPP Sprint in Berlin: On Friday, 12th to Sunday, 14th of July 2024.
  • XMPP Italian happy hour [IT]: monthly Italian XMPP web meeting, every third Monday of the month at 7:00 PM local time (online event, with web meeting mode and live streaming).

Articles

Software News

Clients and Applications

Servers

Libraries & Tools

Extensions and specifications

The XMPP Standards Foundation develops extensions to XMPP in its XEP series in addition to XMPP RFCs.

Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).

Proposed

The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

  • No XEP was proposed this month.

New

  • No new XEPs this month.

Deferred

If an experimental XEP is not updated for more than twelve months, it will be moved off Experimental to Deferred. If there is another update, it will put the XEP back onto Experimental.

  • No XEPs deferred this month.

Updated

  • Version 0.7.0 of XEP-0333 (Displayed Markers)
    • Change title to “Displayed Markers”
    • Bring back Service Discovery feature (dg)
  • Version 0.4.1 of XEP-0440 (SASL Channel-Binding Type Capability)
    • Recommend the usage of tls-exporter over tls-server-end-point (fs)
  • Version 0.2.1 of XEP-0444 (Message Reactions)
    • fix grammar and spelling (wb)
  • Version 1.0.1 of XEP-0388 (Extensible SASL Profile)
    • Fixed typos (md)

Last Call

Last calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call can help improve the XEP before returning it to the Council for advancement to Stable.

  • XEP-0398: User Avatar to vCard-Based Avatars Conversion

Stable

  • Version 1.0.0 of XEP-0386 (Bind 2)
    • Accept as Stable as per Council Vote from 2024-04-02. (XEP Editor (dg))
  • Version 1.0.0 of XEP-0388 (Extensible SASL Profile)
    • Accept as Stable as per Council Vote from 2024-04-02. (XEP Editor (dg))
  • Version 1.0.0 of XEP-0333 (Displayed Markers)
    • Accept as Stable as per Council Vote from 2024-04-17. (XEP Editor (dg))
  • Version 1.0.0 of XEP-0334 (Message Processing Hints)
    • Accept as Stable as per Council Vote from 2024-04-17 (XEP Editor (dg))

Deprecated

  • No XEP deprecated this month.

Rejected

  • XEP-0360: Nonzas (are not Stanzas)

Spread the news

Please share the news on other networks:

Subscribe to the monthly XMPP newsletter
Subscribe

Also check out our RSS Feed!

Looking for job offers or want to hire a professional consultant for your XMPP project? Visit our XMPP job board.

Newsletter Contributors & Translations

This is a community effort, and we would like to thank translators for their contributions. Volunteers and more languages are welcome! Translations of the XMPP Newsletter will be released here (with some delay):

  • English (original): xmpp.org
    • General contributors: Adrien Bourmault (neox), Alexander “PapaTutuWawa”, Arne, cal0pteryx, emus, Federico, Gonzalo Raúl Nemmi, Jonas Stein, Kris “poVoq”, Licaon_Kter, Ludovic Bocquet, Mario Sabatino, melvo, MSavoritias (fae,ve), nicola, Simone Canaletti, XSF iTeam
  • French: jabberfr.org and linuxfr.org
    • Translators: Adrien Bourmault (neox), alkino, anubis, Arkem, Benoît Sibaud, mathieui, nyco, Pierre Jarillon, Ppjet6, Ysabeau
  • Italian: notes.nicfab.eu
    • Translators: nicola

Help us to build the newsletter

This XMPP Newsletter is produced collaboratively by the XMPP community. Each month’s newsletter issue is drafted in this simple pad. At the end of each month, the pad’s content is merged into the XSF Github repository. We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. You have a project and want to spread the news? Please consider sharing your news or events here, and promote it to a large audience.

Tasks we do on a regular basis:

  • gathering news in the XMPP universe
  • short summaries of news and events
  • summary of the monthly communication on extensions (XEPs)
  • review of the newsletter draft
  • preparation of media images
  • translations
  • communication via media accounts

License

This newsletter is published under CC BY-SA license.

May 05, 2024 00:00

May 02, 2024

Erlang Solutions

Naming your Daemons

Within Unix systems, a daemon is a long-running background process which does not directly interact with users. Many similar processes exist within a BEAM application. At times it makes sense to name them, allowing messages to be sent without knowing their process identifier (aka PID). There are several benefits to naming processes, including:

  1. Organised processes: using a descriptive and meaningful name organises the processes in the system. It clarifies the purpose and responsibilities of the process.
  2. Fault tolerance: when a process is restarted due to a fault, it would normally have to share its new PID with all of its callers. A registered name is a workaround for this: once the restarted process is re-registered, no additional action is required and messages to the registered process resume uninterrupted.
  3. Pattern implementation: a Singleton, Coordinator, Mediator or Facade design pattern commonly has one registered process acting as the entry point for the pattern.

Naming your processes

Naturally, both Elixir and Erlang support this behaviour by registering the process. One downside of registering is that the name must be an atom. As a result, there is an unnecessary mapping between atoms and other data structures, typically between strings and atoms.

To get around this, it is a common pattern to perform the registration as a two-step procedure and manage the associations manually, as shown below:


#+begin_src Elixir
{:ok, pid} = GenServer.start_link(Worker, [], [])
# register_name/2, whereis_name/1 and unregister_name/1 stand for the
# manually maintained mapping between composite names and PIDs.
register_name(pid, "router-thuringia-weimar")

pid = whereis_name("router-thuringia-weimar")
GenServer.call(pid, msg)

unregister_name("router-thuringia-weimar")
#+end_src


Figure 1

Notice the example uses a composite name, built up from equipment type (e.g. router), state (e.g. Thuringia) and city (e.g. Weimar). Indeed, this pattern is typically used to handle composite names, and in particular dynamic composite names. This avoids the issue of the lack of atom garbage collection in the BEAM.

As a frequently observed pattern, both Elixir and Erlang offer a convenient method to accomplish this while ensuring a consistent process usage pattern. In typical Elixir and Erlang style, this is subtly suggested in the documentation through a concise, single-paragraph explanation.

In this write-up, we will demonstrate using built-in generic server options to achieve similar behaviour.

Alternative process registry

According to the documentation, we can register a GenServer into an alternative process registry using the :via directive. The registry must provide the following callbacks: register_name/2, unregister_name/1, whereis_name/1, and send/2.

As it happens there are two commonly available applications which satisfy these requirements: gproc and Registry. gproc is an external Erlang library written by Ulf Wiger, while Registry is a built-in Elixir library.

gproc is an application in its own right, which simplifies its use: it only needs to be started as part of your system, whereas Registry requires adding the Registry process to your supervision tree.

We will be using gproc in the examples below to address the needs of both Erlang and Elixir applications. 

To use gproc, we have to add it to the project dependencies.

Into Elixir’s mix.exs:

#+begin_src Elixir
  defp deps do
    [
      {:gproc, git: "https://github.com/uwiger/gproc", tag: "0.9.1"}
    ]
  end
#+end_src

Figure 2

Next, we change the arguments to start_link, call and cast to use the gproc alternative registry, as listed below:

#+begin_src Elixir :noweb yes :tangle worker.ex
defmodule Edproc.Worker do
  use GenServer

  def start_link(name) do
    GenServer.start_link(__MODULE__, [], name: {:via, :gproc, {:n, :l, name}})
  end

  def call(name, msg) do
    GenServer.call({:via, :gproc, {:n, :l, name}}, msg)
  end

  def cast(name, msg) do
    GenServer.cast({:via, :gproc, {:n, :l, name}}, msg)
  end

  <<worker-gen-server-callbacks>>
end
#+end_src

Figure 3

As you can see the only change is using {:via, :gproc, {:n, :l, name}} as part of the GenServer name. No additional changes are necessary. Naturally, the heavy lifting is performed inside gproc.

The tuple {:n, :l, name} is specific to gproc and refers to setting up a local (:l) name (:n) registry. See the gproc documentation for additional options.

Finally, let us take a look at some examples.

Example

In an Elixir shell:

#+begin_src Elixir
iex(1)> Edproc.Worker.start_link("router-thuringia-weimar")
{:ok, #PID<0.155.0>}
iex(2)> Edproc.Worker.call("router-thuringia-weimar", "hello world")
handle_call #PID<0.155.0> hello world
:ok
iex(4)> Edproc.Worker.start_link({:router, "thuringia", "weimar"})
{:ok, #PID<0.156.0>}
iex(5)> Edproc.Worker.call({:router, "thuringia", "weimar"}, "reset-counter")
handle_call #PID<0.156.0> reset-counter
:ok
#+end_src

Figure 4

As shown above, it is also possible to use a tuple as a name. Indeed, it is a common pattern to categorise processes with a tuple reference instead of constructing a delimited string.

Summary

The GenServer behaviour offers a convenient way to register a process with an alternative registry such as gproc. This registry permits the use of any BEAM term instead of the usual non-garbage-collected atom name, enhancing the ability to manage process identifiers dynamically. For Elixir applications, using the built-in Registry module might be a more straightforward and native choice, providing a simple yet powerful means of process registration directly integrated into the Elixir ecosystem.
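
For comparison, here is a minimal sketch of the same worker using the built-in Registry. It assumes a unique-keyed Registry named Edproc.Registry has been started (normally as a child of your supervision tree), and the module name is purely illustrative.

#+begin_src Elixir
# Normally started from your application's supervision tree.
{:ok, _} = Registry.start_link(keys: :unique, name: Edproc.Registry)

defmodule Edproc.RegistryWorker do
  use GenServer

  def start_link(name) do
    GenServer.start_link(__MODULE__, [], name: {:via, Registry, {Edproc.Registry, name}})
  end

  def call(name, msg) do
    GenServer.call({:via, Registry, {Edproc.Registry, name}}, msg)
  end

  @impl true
  def init(_), do: {:ok, []}

  @impl true
  def handle_call(msg, _from, state) do
    IO.puts("handle_call #{inspect(self())} #{inspect(msg)}")
    {:reply, :ok, state}
  end
end

Edproc.RegistryWorker.start_link({:router, "thuringia", "weimar"})
Edproc.RegistryWorker.call({:router, "thuringia", "weimar"}, "reset-counter")
#+end_src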

Appendix

#+NAME: worker-gen-server-callbacks
#+BEGIN_SRC Elixir
  @impl true
  def init(_) do
    {:ok, []}
  end

  @impl true
  def handle_call(msg, _from, state) do
    IO.puts("handle_call #{inspect(self())} #{msg}")
    {:reply, :ok, state}
  end

  @impl true
  def handle_cast(msg, state) do
    IO.puts("handle_cast #{inspect(self())} #{msg}")
    {:noreply, state}
  end
#+END_SRC

Figure 5

The post Naming your Daemons appeared first on Erlang Solutions.

by Tee Teoh at May 02, 2024 13:46

April 25, 2024

Erlang Solutions

Technical debt and HR – what do they have in common?

At first glance, it may sound absurd. Here we have technical debt, a purely engineering problem, as technical as it can get, and another area, HR, dealing with psychology and emotions, put into one sentence. Is it possible that they are closely related? Let’s take it apart and see.

Exploring technical debt

What is technical debt, anyway? A tongue-in-cheek definition is that it is code written by someone else. But there is more to it – it is code written years ago, possibly by someone who has left the company. Also, every major piece of software is written incrementally over many years. Even if it started with a very detailed, well-thought-through design, there inevitably came plenty of modifications and additions which the original design could not easily accommodate.

Your predecessors sometimes had to cut corners and bend over backwards to achieve desired results in an acceptable amount of time. Then they moved on, someone else took over and so on.

What you now have is a tangled mess, mixing coding styles, techniques and design patterns, with an extra addition of ad-hoc solutions and hacks. You see a docstring like “temporary solution, TODO: clean up”, you run git blame and it is seven years old. It happens to everybody.

The business behind technical debt

Technical debt is a business issue. You can read about it in more detail in our previous post.

Source: Medium

The daily tasks of most developers are fixing bugs and implementing new features in existing code. The more messy and convoluted the code is, the more time it takes to read it and reason about it every time. And it is real money: according to a McKinsey report, this burden amounts to 20%-40% of an average business’s technology stack. Engineers are estimated to spend up to 50% of their time struggling with it.

So what can businesses do to get their code in check? Here are some suggestions:

  • Taking a step back 
  • Reassessing the architecture and techniques 
  • Making more informed choices 
  • Rewriting parts of the code to make it consistent and understandable, removing unused code and duplications

Unfortunately, this is very rarely done, since it does not bring any visible improvements to the product – clients are not interested in code quality, they want software that does its job. Improving the code costs real money, while the increase in developer productivity is impossible to quantify.

Technical debt also has another property – it is annoying. And this brings us nicely to the second topic.

Happy HR, Happier devs

What is HR about? In part, it is about the well-being of employees. Every employer wants good people to stay in the company. The most valuable employee is someone who likes their job and feels good about the place. HR departments go to great lengths to achieve this.

But, you can buy new chairs and phones, decorate the office, buy pizza, organise board games evenings – all this is mostly wasted if the following morning your devs show up in their workplace only to say “Oh no, not this old cruft again”, embellishing that statement with a substantial amount of profanities.

Now I tell you this: Nothing makes developers happier than allowing them to address their pain points. Ask them what they hate the most about the codebase and let them improve it, the way they choose to, at their own pace. They will be delighted.

You may ask how I know. Firstly, I’m a dev myself. Secondly, I’m fortunate enough to be currently working for a company that took the steps and did exactly that:

Step 1: Set up a small “tech debt” team

Step 2: Collected improvement proposals from all developers

Step 3: Documented them

Step 4: Defined priorities

Currently, the technical debt team or the proposers themselves are gradually putting these proposals into action, one by one. The code is getting better. We are becoming more productive. And if we’re happy, isn’t HR?

Calling upon the compassionate and proactive HR professionals out there: talk to your CTOs, tell them you all are after the same thing – you want these frustrated, burned-out coders happy, enthusiastic and more productive, and that you have an idea of how to achieve this.

Chances are they will be interested.

The post Technical debt and HR – what do they have in common? appeared first on Erlang Solutions.

by Bartek Gorny at April 25, 2024 09:17

ProcessOne

ejabberd Docs now using MkDocs

The ejabberd Docs website did just get a major rework: new content management system, reorganized navigation, improved markdown, and several improvements!

Brief documentation timeline

ejabberd started in November 2002 (see a timeline in the ejabberd turns 20 blog post). And the first documentation was published in January 2003, using LaTeX, see Ejabberd Installation and Operation Guide. That was one single file, hosted in the ejabberd CVS source code repository, and was available as a single HTML file and a PDF.

As the project grew and got more content, in 2015 the documentation was converted from LaTeX to Markdown, moved from ejabberd repository to a dedicated docs.ejabberd.im git repository, and published using a Go HTTP server in docs.ejabberd.im, see an archived ejabberd Docs site.

New ejabberd Docs site

Now the ejabberd documentation has moved to MkDocs+Material, and this brings several changes and improvements:

Site and Web Server:

  • Replaced Go site with MkDocs
  • Material theme for great features and visual appeal, including light/dark color schemes
  • Still written in Markdown, but now using several MkDocs, Material and Python-Markdown extensions
  • The online site is built by GitHub Actions and hosted on GitHub Pages, with shorter automatic deployment times
  • Offline reading: the ejabberd Docs site can be downloaded as a PDF or zipped HTML, see the links in home page

Navigation

  • Major navigation reorganization, keeping URLs intact so old links still work (only Install got some relevant URL changes)
  • Install section is split into several sections: Containers, Binaries, Compile, …
  • Reorganized the Archive section, and now it includes the corresponding Upgrade notes
  • Several markdown files from the ejabberd and docker-ejabberd repositories are now incorporated here

Content

  • Many markdown visual improvements, especially in code snippets
  • Options and commands that were modified in the last release will show a mark, see for example API Reference
  • Version annotations are shown after the corresponding title, see for example sql_flags
  • Modules can have version annotations, see for example mod_matrix_gw
  • Links to modules, options and API now use the real name with _ character instead of - (compare old #auth-opts with #auth_opts). The old links are still supported, no broken links.
  • Listen Modules section is now better organized
  • New experimental ejabberd Developer Livebook

So, please check the revamped ejabberd Docs site, and head to docs.ejabberd.im git repository to report problems and propose improvements.

The post ejabberd Docs now using MkDocs first appeared on ProcessOne.

by Badlop at April 25, 2024 08:54

April 21, 2024

Remko Tronçon

Generating coverage reports & badges for SwiftPM apps

The age plugin for Apple’s Secure Enclave is a small, platform-independent Swift CLI app, built using the Swift Package Manager. Because it does not use Xcode for building, you can’t use the Xcode IDE for collecting and browsing test coverage of your source files. I therefore wrote a small (self-contained, dependencyless, cross-platform) Swift script to transform SwiftPM’s raw coverage data into annotated source code, together with an SVG badge to put on your project page.

by Remko Tronçon at April 21, 2024 00:00

April 18, 2024

Erlang Solutions

Blockchain Tech Deep Dive| Meaning of Ownership

Welcome to part three of our ‘Making Sense of Blockchain’ blog post series. Here we’ll explore how our attitudes to ownership are changing and how this relates to the value we attach to digital assets in the blockchain space. You can check out ‘Innovating with Erlang and Elixir’ here if you missed part two of the series.

Digital Assets: Ownership in the era of Blockchain

Physical goods contain an abstract element: the design, and the capacity to model, package and make them appealing to owners or consumers. Digital assets, however, have a far stronger element of abstraction which defines their value. In contrast, their physical element is often negligible and replaceable (e.g. software can be stored on disk, transferred or printed). These types of assets typically stimulate our intellect and imagination.

Digital goods have a unique quality in that they can be duplicated effortlessly and inexpensively. They can exist in multiple forms across various platforms, thanks to the simple way we store them using binary code. They can be recreated endlessly from identical copies. This is a feature that dramatically influences how we value digital assets. Because replicas are so easy to make, it is not the copies or representations that hold value, but the original digital creation itself. This principle is a cornerstone of blockchain technology, with its hash lock feature safeguarding the integrity of digital assets.

If digital items are handled correctly, the capacity to clone them can increase confidence that they will exist indefinitely, which keeps their value steady. However, the immutable and perpetual existence of digital goods isn’t guaranteed: they depend on a physical medium (e.g. hard disk storage) that could be altered, degrade or become obsolete over time.

A blockchain, like the one used in the Bitcoin network, is a way to replicate and reinforce digital information via Distributed Ledger Technology (DLT). 

An example of the DLT network

Distributed Ledger Technology lets users record, share and synchronise data and transactions across a distributed network comprising many participants.

It includes mechanisms to repair issues, should data become corrupted due to hard disk failure or a malicious attack.

However, as genetic evolution suggests, clones with identical characteristics can all be wiped out by the introduction of an actor that makes the environment unfit for their survival. So it might be sensible to introduce different types of ledgers to keep data safe on various physical platforms, increasing the likelihood that the information survives.

The evolution of services and their automation

Now let’s consider how we have started to attach value to services and how we are becoming increasingly demanding about their performance and quality.

Services are a type of abstract value often traded on the market. They involve actions defined by contracts that lead to some kind of change. This change can apply to physical goods, digital assets, other services, or people. What we trade is the potential to exercise a transformation, which in some instances might have been applied already; for example, a refined product like oil that has already been changed from its original raw state.

As transformations become more automated and the human element decreases, services are gradually taking the shape of automated algorithms, which are yet another form of digital assets. Take smart contracts, for example: a rapidly growing industry projected to grow from USD 1.9 billion in 2023 to USD 9.2 billion by 2032, according to Market Research Future.

Smart Contracts Market Projection Overview

But it’s important to state that an algorithm alone isn’t enough to apply digital transformation, we also require an executor, like a physical or virtual machine.

Sustainability and access to resources

Stimulation of the intellect and/or imagination isn’t the only motivator that explains the increasing interest in digital goods and ultimately their rising market value. Physical goods are known to be quite expensive to handle. To create, trade, own and preserve them, there is a significant expenditure required for storage, transport, insurance, maintenance, extraction of raw materials etc.

There’s a competitive and environmental cost involved, making physical resources inherently difficult to scale and sometimes costly to obtain, especially in densely populated urban areas. As a result, people are motivated to possess and exchange digital goods and services.

The high power consumption required by the Bitcoin network’s method of consensus would potentially negate these environmental benefits. Although power consumption is a concern, it should be remembered that blockchain technology can act as a force for good, being used for environmentally beneficial projects. 

A great example is the work being done by dClimate, a decentralised climate information ecosystem making it easier for businesses to find and utilise essential environmental information that could impact their sector. These important decisions in turn provide information on: 

  • Where businesses can build infrastructure
  • Where they can manage water resources
  • How businesses can protect vulnerable communities

However, some of these activities (such as those requiring non-physical effort, like stock market trading, and legal or accounting services) are best suited for significant cost reduction through algorithmic automation (assuming that the high carbon footprint required to drive the ‘Proof of Work’ consensus mechanism used in many DLT ecosystems can be avoided).

Barriers to acceptance of digital assets

While it is sensible to forecast a significant expansion of the digital assets market in the coming years, it is also true that, at present, there are still many psychological barriers to overcome to get broader traction in the market.

The main challenge relates to trust. A buyer wants some assurance that traded assets are genuine and that the seller owns them or acts on behalf of the owner. DLT provides a solid way to work out the history of a registered item without interrogating a centralised trusted entity. Provenance and ownership are inferable and verifiable from several replicated ledgers, while block sequences can help ensure there is no double spending or double sale taking place within a certain time frame.

Another challenge is linked to the meaning of ownership outside of the context of a specific market. Take the closure of Microsoft’s ebook store. Microsoft’s decision to pull out of the ebook market, presumably motivated by a lack of profit, could have an impact on all ebook purchases that were made on that platform. The perception of the customer was obviously that owning an ebook was the same as owning a physical book. 

What Microsoft might have contractually agreed through its End-User License Agreement (EULA), however, is that this is true only within the contextual existence of its platform.

There is a push, in this sense, towards forms of ownership that can break out from the restrictions of a specific market and be maintained in a broader context. Blockchain’s DLT, in conjunction with smart contracts that exist potentially indefinitely, can be used to serve this purpose, allowing people to effectively retain the use of their digital items across multiple applications.

The transition to these new notions of ownership is particularly demanding when it comes to digital non-fungible assets. Meanwhile, embracing fungible assets, such as cryptocurrency, has been somewhat easier for customers who are already used to relating to financial instruments. 

This is probably because fungible assets serve the unique function of paying for something, while in the case of non-fungible assets, there is a range of functions that define their meaning in the digital or physical space.

What this will mean for blockchain adopters

The major emerging innovation discussed here, the ownership of digital assets, has been influenced dramatically by blockchain technology over the last few years. It is clear that we are witnessing a new era that is likely to revolutionise the perception of ownership and reliance on trusted and trustless forms of automation. This is driven by the need to increase interoperability, cost compression, sustainability, performance and customisation. For any business size in any industry, we’re ready to investigate, build and deploy your blockchain-based project on time and to budget. Let us know about your blockchain project here.

The post Blockchain Tech Deep Dive| Meaning of Ownership appeared first on Erlang Solutions.

by Erlang Solutions Team at April 18, 2024 08:59

April 15, 2024

Monal IM

ROS Security Audit

Radically Open Security (ROS) kindly performed a security audit of some parts of Monal.
Specifically they audited the usage of our XML query language and the implementations of SASL2, SCRAM and SSDP.

The results in a nutshell: no security issues found. Read the full report here: Monal IM penetration test report 2024 1.0.

April 15, 2024 00:00

April 13, 2024

Snikket

Snikket Android app temporarily unavailable in Google Play store [RESOLVED]

We initially shared this news on our social media page, thinking this was a temporary issue. But we’ve had no response from Google for several days, and want to explain the situation in more detail.

Update 16th April: Over a week after this began, Google have reinstated the Snikket app on the Play Store and everything works again. Thanks to everyone who gave us encouragement and support during this time! Feel free to read on for details of what happened.

Summary

We merged some changes from our upstream project, Conversations, and we submitted the new version to Google for review. Before responding, they removed the existing published version from the store. We have submitted a new version (on 10th April) that we believe should satisfy Google, but they have not yet published it or provided any feedback.

This means that it’s not currently possible for Android users to install the app using Google Play. We recommend that you install it via F-Droid instead.

Workaround for Android users

If you receive an invitation to Snikket, the Play Store link in the invitation will not work. The best course of action is to install the app using an open-source marketplace instead: F-Droid.

  1. Follow the instructions on f-droid.org to download and install F-Droid.
  2. Install Snikket using F-Droid.
  3. After the Snikket app is installed, open your Snikket invitation link again.
  4. Tap the ‘Open the app’ button.
  5. Follow the Snikket app’s instructions to set up your new Snikket account.

The full story

I’m Matthew, founder of Snikket and lead developer. This is the story of how we arrived at this situation with Google.

It all began when…

A few months ago, Snikket, along with a number of other XMPP apps, found our updates rejected by Google’s review team, claiming that because we upload the address book entries of users to our servers, we need a “prominent disclosure” of this within the app. The problem is… we don’t upload the user’s address book anywhere!

The app requests permission to read the address book. Granting this permission is optional, and the reason is explained before the permission is requested. If you grant the permission, the app has a local-only (no upload!) feature that allows you to “link” your XMPP contacts with your phone address book contacts, allowing you to unify things like contact photos. Contact information from your address book is never uploaded.

Many messaging apps, such as WhatsApp, Signal, and others, request access to your address book so they can upload them to their servers and determine who else you know that is using their service. Google have decided that’s what we’re doing, and they won’t accept any evidence that we’re not.

We don’t have telemetry in our app, but we assumed that this feature is probably not used by most people, so we decided to remove it from the Play Store version of the app rather than continue fighting with Google.

Amusingly, Google also rejected the update that removed the ‘READ_CONTACTS’ permission. Multiple times. It took an appeal before they revealed that they were rejecting the new version because one of the beta tracks still had an older version with the READ_CONTACTS permission. Weird.

I fixed that, and submitted again. They rejected it again. This time they said that they required a test login for the app. Funny, because we already provided one long ago. I assumed the old test account was no longer working, so I made them a new one and resubmitted the app. They rejected it again with the same reason - saying we had not provided valid test account credentials.

“You didn’t provide an active demo/guest account or a valid username and password which we need to access your app.” – Google reviewers

The weird thing was, when I logged in to that account to test it, I saw that they had logged in and even sent some messages. So they were lying?!

We submitted an appeal with all the evidence that the account was working, and their reviewers had even logged in and used it successfully. After some time, they eventually responded that they wanted a second test account. Why couldn’t they just say that in the first place?!

After adding credentials for a second account, and using the Snikket circles features to ensure they could find each other easily, we resubmitted.

Rejected again.

This time the rejection reason was really the best one so far: they claimed the app was unable to send or receive messages. Rather funny for a messaging app that thousands of people use to send and receive messages daily.

Wait, a messaging app that can’t send messages?

Screenshot of Google’s response: Issue found: Message functionality. The message sending and/or receiving functionality on your app doesn’t work as expected. For example: Your app is not able to send outgoing messages. Your app is not able to receive incoming messages.

Once again, I logged into the test account we had provided to Google, and once again saw that they had successfully exchanged messages between their two test accounts. We submitted another appeal, with evidence.

Eventually they responded, clarifying that their complaint was specifically about the app when used with Android Auto, their smart car integration. I do not have such a car, and couldn’t find any contributor who had, but I found that Google provide an emulator that can run on a PC, so I set that up on my laptop and proceeded to test.

You won’t be surprised to learn at this point that the messaging functionality worked fine. We responded to the appeal, including a screencast I made of the messaging functionality working with Android Auto. They informed us that they were “unable to assist with the implementation” of their policies. Then at the end of their response, suggested that if we think the app is compliant, that we should resubmit it for review.

So we resubmitted the app, which by this point had already been rejected 7 times. We resubmitted it with no modification at all. We resubmitted the version they rejected. They emailed us later that day to say it was live.

How would I rate the developer experience of publishing an app with Google Play? An unsurprising 1 star out of 5. If I could give zero, I would.

The removal

But this was all a couple of months ago. Everything was fine. Until I merged some of the nice things Daniel has been working on recently in Conversations, the app upon which Snikket Android is based. We put the new version out for beta testing and everything was going fine - the app passed review, and a few weeks later with no major issues reported, we pushed the button to promote the new version from beta to live on the store.

On the 8th April we received an email from Google with the subject line:

“Action Required: Your app is not compliant with Google Play Policies (Snikket)”

I was ill this day, and barely working. For reasons that, if you have read this far, you will hopefully understand, I decided to take up this fight when I was feeling better. Confusingly, a couple of days later we received another email with the same subject. At this point I realised with horror that the first email was not about the new update - they had reviewed the current published version and decided to remove it entirely from the store.

With Snikket unavailable, anyone trying to add a new Android user to their Snikket instance (whether hosted or self-hosted) is going to have a hard time. This is not good.

Their complaint was that the privacy policy was not prominent enough within the app. They had previously hit Conversations with the same thing. Daniel had already put a link to the privacy policy in the main menu of that app and this was already in the update waiting for their review. They didn’t reject the update until a couple of days later, and for a different reason.

Unknown to me, Daniel had tried to re-add the ‘READ_CONTACTS’ permission to Conversations, hoping that with the new privacy policy link and other disclaimers in place, that would be enough. They had already rejected that, and he had removed the permission again. But he did this after I had already started testing the new beta release of Snikket. The order of events went something like this:

  • Daniel experimentally re-adds READ_CONTACTS permission to Conversations
  • I merge Conversations changes into Snikket, and begin beta testing
  • Conversations update gets rejected due to the permission, and Daniel reverts the READ_CONTACTS change
  • Without knowing of the Conversations rejection, I promote the Snikket beta to the store.
  • Google rejects the Snikket update

What’s interesting is that Google rejected only on the permission change. The contacts integration itself was still disabled in Snikket. This is strong evidence that Google just assumes that if you have the permission (and presumably network permission too) then of course you must be uploading the user’s contacts somewhere.

As soon as I realised the problem, I merged the new changes from Conversations and rushed a new upload to Google Play. However at the time of writing this, several days later, Snikket remains unavailable in the store and no feedback has been received from Google.

This is an unsustainable situation

During this period we have had multiple people sign up for hosted Snikket instances, and then cancel shortly after. This is almost certainly because a vital step of the onboarding process (installing the app) is currently broken. This is providing a bad experience for our users and customers, negatively affecting the project’s reputation and income.

We are grateful that alternatives such as F-Droid exist, and allow people access to open-source apps via a transparent process and without the tyranny of Google and their faceless unaccountable review team. We need to ensure these projects are supported, and continue to improve their functionality, usability and user awareness.

Finally, we also welcome the efforts that the EU has been working on with things like the Digital Markets Act, to help break up the control that Google’s (demonstrably) arbitrary review process has over the success and failure of projects, and the livelihoods of app developers.

Google, are you there?

Screenshot of Google Play dashboard: Release summary: “in review”

by Snikket Team (team@snikket.org) at April 13, 2024 11:00

April 11, 2024

Erlang Solutions

Blockchain Tech Deep Dive | Innovating with Erlang and Elixir

We’re back with the latest in our Blockchain series, where we explore the technology in depth. In our first post, we explored the Six Key Principles of Blockchain.

In our latest post, we’re making the case for using Erlang, Elixir and the BEAM VM to power your blockchain project.

Blockchain and business needs

Building a robust and scalable blockchain presents many challenges that a research and development team typically needs to address. The often ambitious goals to drive decentralised consensus and governance require unconventional approaches to achieve extra performance and reliability.

Improved Transactions per Second (TPS) is the most common challenge that blockchain-related use cases expose. TPS, as the name suggests, is a metric that indicates how many transactions a network can execute per second. It is inherently difficult to produce a distributed peer-to-peer (P2P) network that can register transactions into a single data structure.

Guaranteeing consensus while delivering high TPS throughput among a vast number of nodes active on the network is even more challenging. Also, the fact that most public blockchains need to operate in a non-trusted mode requires adequate mechanisms for validation, which implies that contextual data needs to be available on demand. A blockchain should also be able to respond to anomalies such as network connectivity loss, node failure and malicious actors.

All of the above is further complicated by the continuous growth of the blockchain data structure itself, which becomes problematic in terms of storage.

It is clear that, unless businesses are prepared to invest vast amounts of resources, they would benefit from a high-level language and technology that allows them to quickly prototype and amend their code.

The ideal technology should also:

  • Offer a strong network and concurrent capabilities
  • Have technology built with distribution in mind 
  • Offer a friendly paradigm for asynchronous computation
  • Not collapse under heavy load
  • Deliver when traffic exceeds capacity

The Erlang BEAM VM (also available via the Elixir syntax) undoubtedly scores highly on the above list of desirable features.

Erlang & Elixir’s strengths for blockchain

Fast development

The challenge: Blockchain technology is present in extremely competitive markets. According to a Grand View Research market analysis report, the global blockchain technology market size was valued at USD 17.46 billion in 2023 and is expected to grow at a compound annual growth rate (CAGR) of 87.7% from 2023 to 2030.

Grand View Research market analysis report

It is critical for organisations operating in them to be able to release new features in time to attract new customers and retain existing ones.

The answer: Both Erlang and Elixir are functional languages, operating at a very high level of abstraction which is ideal for fast prototyping and development of new ideas. By using these languages on top of the Beam VM, developers dramatically increase the speed to market when compared to other lower-level or object-oriented technologies.

Solutions developed in Erlang or Elixir also lead to a significantly smaller code base, which is easier to test and adapt to changes of direction. This is helpful when you proceed to fast prototyping new solutions and when you discover that amendments and upgrades are necessary, which is very typical in blockchain R&D activity. Both languages offer support for unit testing in their standard library. This enables developers to adopt Test Driven approaches ensuring the quality is preserved when modules and libraries get refactored. The common test framework also provides support for distributed tests and can be integrated with Continuous Integration frameworks like Jenkins. Both Erlang and Elixir shells let the programmer flesh out ideas fast and access running nodes for inspection.

Introspection

The challenge: To keep a competitive advantage in a fast-changing market, it is critical for organisations to promptly identify issues and opportunities by extracting relevant information about their running systems so that actions can be taken to upgrade them where appropriate.

The answer: Erlang and Elixir allow you to connect to an already running system and check its status. This is an extremely useful debugging tool, both in development and in production. The status of processes can be checked, and deadlocks in the live system can be analysed. Various metrics and tools can show overload, bottlenecks and other key performance indicators. Enhanced introspection tools such as Erlang Solutions’ Wombat OAM also help with identifying scalability issues when running performance tests.
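
As an illustrative sketch (node name, cookie and host below are placeholders), an operator can attach a remote shell to a live node and inspect it without stopping it:

# Attach an IEx remote shell to a running node without stopping it:
#
#   iex --sname debug --cookie secret --remsh mynode@myhost
#
# A few of the introspection calls available once connected:
:erlang.memory(:total)        # total memory used by the VM, in bytes
length(Process.list())        # number of live processes

# Find the process with the longest message queue, a common sign of a bottleneck:
Process.list()
|> Enum.map(fn pid -> {pid, Process.info(pid, :message_queue_len)} end)
|> Enum.reject(fn {_pid, info} -> info == nil end)
|> Enum.max_by(fn {_pid, {:message_queue_len, len}} -> len end)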

Networking

The challenge: Delivering a highly scalable and distributed P2P network is critical for blockchain enterprises. It is important to rely on stable battle-proven network libraries as reliable building blocks for exploiting use case-specific innovative approaches.

The answer: Erlang and Elixir come with strong and easy-to-manage network capabilities. There is a proven record of successful enterprises that rely on the BEAM VM’s networking strengths, including AdRoll, WhatsApp, Bleacher Report, Klarna, Bet365 and Ericsson. Most of their use cases have strong analogies with the P2P networking required to deliver a distributed blockchain.

Combined with massive concurrency, this networking strength makes Erlang and Elixir ideal for server applications that handle many clients. The binary and bitstring syntax makes parsing binary protocols particularly easy.
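
For example, a made-up fixed-layout message header can be unpacked with a single pattern match; the field layout below is invented purely to illustrate the bitstring syntax:

# Parsing a made-up binary message header with Elixir's bitstring syntax.
# Hypothetical layout: 1-byte version, 1-byte type, 4-byte big-endian payload
# length, the payload itself, then whatever bytes follow.
defmodule WireFormat do
  def parse(<<version::8, type::8, len::32-big, payload::binary-size(len), rest::binary>>) do
    {:ok, %{version: version, type: type, payload: payload}, rest}
  end

  def parse(_other), do: {:error, :incomplete}
end

WireFormat.parse(<<1, 2, 0, 0, 0, 3, "abc", "tail">>)
# => {:ok, %{version: 1, type: 2, payload: "abc"}, "tail"}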

Massively concurrent

The challenge: A weakness afflicting Bitcoin and Ethereum is that the computation of a block is competitive rather than collaborative. There is an opportunity to drive a collaborative, concurrent approach, for example via sharding, so that each actor can compute a portion of a block.

The answer: The BEAM VM powering Erlang and Elixir provides lightweight processes for applications. They are so lightweight that hundreds of thousands of them can run simultaneously. These processes do not share memory, and communication happens via asynchronous message passing (unlike goroutines, which can share memory), so there is no need to synchronise them. The BEAM VM also makes use of all available CPUs and cores. This makes Erlang and Elixir ideal for workloads that involve a huge amount of concurrency and consist of mostly independent workflows. It is especially useful for coordinating the distribution of portions of work when computing a Merkle tree of transactions, as sketched below.
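
A minimal Elixir sketch of that idea: hash chunks of transactions in parallel, one lightweight process per chunk, then fold the chunk hashes together. The chunk size and the simplified tree construction are illustrative only, not a production Merkle implementation:

defmodule ParallelMerkle do
  # Hash each chunk of transactions in its own lightweight process, then fold
  # the chunk hashes into a single (simplified) root hash.
  def root(transactions, chunk_size \\ 1000) do
    transactions
    |> Enum.chunk_every(chunk_size)
    |> Task.async_stream(&hash_chunk/1, ordered: true)
    |> Enum.map(fn {:ok, hash} -> hash end)
    |> combine()
  end

  defp hash_chunk(chunk), do: :crypto.hash(:sha256, Enum.join(chunk))

  defp combine([root]), do: root

  defp combine(hashes) do
    hashes
    |> Enum.chunk_every(2)
    |> Enum.map(&:crypto.hash(:sha256, Enum.join(&1)))
    |> combine()
  end
end

# ParallelMerkle.root(["tx1", "tx2", "tx3", "tx4"])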

High availability and resilience

The challenge: These are requirements for every type of application, and even more so for competitive and highly distributed blockchain networks. Communication and the preservation of state need to be as available as possible to avoid inconsistent states, network forks and disruption to users.

The answer: The fault-tolerance properties mentioned in the previous paragraph, combined with built-in distribution, lead to high availability even in cases of hardware malfunction. Erlang and Elixir ship with the built-in Mnesia database, which can replicate data over a cluster of nodes, so if one node goes down the state of the system is not lost.

Erlang and Elixir provide the supervisor pattern to handle errors.

An example of a Supervision Tree, used to build a hierarchical process structure 

Computation is done in multiple processes, and if an error occurs and a process crashes, the supervisor is there to handle the situation: restart the computation or take some other measure. This pattern keeps the actual code clean, as error handling can be implemented elsewhere. Because processes are isolated and do not share memory, errors remain localised.
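
A minimal Elixir sketch of the pattern (the worker module and its role are hypothetical): the supervisor restarts a crashed worker automatically, so the worker code itself carries no defensive error handling:

defmodule BlockWorker do
  use GenServer

  # A hypothetical worker; if it crashes while handling a message, the
  # supervisor below restarts it with a fresh state.
  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  def init(opts), do: {:ok, opts}
end

defmodule BlockSupervisor do
  use Supervisor

  def start_link(opts), do: Supervisor.start_link(__MODULE__, opts, name: __MODULE__)

  def init(_opts) do
    children = [
      {BlockWorker, []}
    ]

    # :one_for_one restarts only the child that crashed.
    Supervisor.init(children, strategy: :one_for_one)
  end
end

# BlockSupervisor.start_link([])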

Built-in distribution

The challenge: This is highly relevant for trusted or hybrid networks where a central network takes authoritative decisions on top of a broader P2P network. Using the “out of the box” Erlang distribution and proven consistency approaches such as RAFT can be a quick win towards a fast prototype of a blockchain solution.

The answer: Erlang and Elixir provide out-of-the-box tools to run a distributed system. The message-handling functionalities of the system are transparent, so sending a message to a remote process is just as simple as sending it to a local process. There’s no need for convoluted IDLs, naming services or brokers.
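
A minimal sketch (node names and the registered process name are placeholders): once two nodes share a cookie, sending a message to a process registered on a remote node looks exactly like sending to a local one:

# Assuming this node was started with e.g. `iex --sname a --cookie secret`;
# :b@myhost and :ledger are placeholders.
Node.connect(:b@myhost)

# Sending to a process registered as :ledger on the remote node uses the same
# send/2 as for a local registered process:
send({:ledger, :b@myhost}, {:new_block, self(), %{height: 42}})

# Or run a function directly on the remote node:
Node.spawn(:b@myhost, fn -> IO.puts("running on #{Node.self()}") end)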

Safety and Security

The challenge: Among the security features that both trusted and untrusted blockchain solutions strongly require is protection of access to the state memory, which reduces exposure to a whole range of vulnerabilities.

The answer: Erlang and Elixir, just like many high-level languages, do not manipulate memory directly, so applications written in Erlang and Elixir are effectively immune to many vulnerabilities such as buffer overflows and return-oriented programming. Exploiting these vulnerabilities would require direct access to the memory, but the BEAM VM hides the memory from the application code.

While many business leaders are still trying to figure out how to put the technology to work for maximum ROI, most agree on two things:

  1. Blockchain unlocks vast value potential from optimised business operations.
  2. It’s here to stay.

Unlocking the potential of technology

Talking about blockchain implementation is no longer merely food for thought. Organisations should keep an eye on developments in blockchain tech and start planning how to best use this transformative technology, to unleash trapped value in key operational processes.

It’s clear – blockchain should be on every company’s agenda, regardless of industry.

If you want to start a conversation about engaging us for your project, just drop the team a line.

The post Blockchain Tech Deep Dive | Innovating with Erlang and Elixir appeared first on Erlang Solutions.

by Erlang Solutions Team at April 11, 2024 09:10

April 07, 2024

The XMPP Standards Foundation

The XMPP Newsletter March 2024

XMPP Newsletter Banner

The 50th release of the XMPP Newsletter!

This is the 50th release of the XMPP Newsletter since it started in February 2019. We think this achievement is worth celebrating, and we want to say thanks to all the contributors as well as all our readers! Back at the Summit in Brussels, JC Brand, Nicolas Vérité (Nyco) and Severino Ferrer (S0ul) proposed and initiated the XMPP Newsletter. Since then, almost every month there has been a release full of news from the XMPP universe. We are looking forward to the next releases and invite you to participate in this community effort! We would love to see more contributors as well as more translations of the XMPP Newsletter. Read more about how to help below.

That being said, welcome to the 50th edition of the XMPP Newsletter, great to have you here again! This issue covers the month of March 2024.

XSF Announcements

Welcome to our reapplicants and new members in Q1 2024! If you are interested in joining the XMPP Standards Foundation as a member, please apply now.

XMPP and Google Summer of Code 2024

The XSF has been accepted as a hosting organisation for GSoC 2024 again! XMPP project mentors are currently reviewing the proposals. GSoC project ideas from XMPP-related projects are:

XSF and Google Summer of Code 2024

XSF Fiscal Hosting Projects

The XSF offers fiscal hosting for XMPP projects. Please apply via Open Collective. For more information, see the announcement blog post. Current projects you can support:

XMPP Events

  • XMPP Italian happy hour [IT]: monthly Italian XMPP web meeting, every third Monday of the month at 7:00 PM local time (online event, with web meeting mode and live streaming).

Talks

  • Last November the Linux Inlaws Podcast hosted Edward and Matt from the XSF Board 2024. The episode has been published recently and can be accessed via archive.org.

Articles

Axel Reimer published an XMPP app decision table. It compares typical instant messaging features of popular XMPP apps and might be a guideline for new users who want to try an XMPP app. The table is available in English and German.

You can now ask all your questions about XMPP Providers in the new support chat. The chat is hosted on its own XMPP server, which is set up with the new XMPP Providers Server project. It is a simple, Ansible-based all-in-one server setup that you can also use for your own XMPP server.

Snikket Hosting is now publicly available! The launch is about providing new ways to get started with Snikket, not replacing the options that are already available. If you are already self-hosting Snikket, or planning to, nothing is changing for you. Though please do donate to support the project, even a little helps!

European Union news:

Software News

Clients and Applications

Servers

Libraries & Tools

  • A new release of go-xmpp 0.1.4.
  • A new release of go-sendxmpp 0.9.0.
  • overpush: A self-hosted, drop-in replacement for Pushover that uses XMPP as the delivery method. It offers the same API for submitting messages, so that existing setups (e.g. Grafana) can keep working and only require changing the API URL. Release article.

Extensions and specifications

The XMPP Standards Foundation develops extensions to XMPP in its XEP series in addition to XMPP RFCs.

Developers and other standards experts from around the world collaborate on these extensions, developing new specifications for emerging practices, and refining existing ways of doing things. Proposed by anybody, the particularly successful ones end up as Final or Active - depending on their type - while others are carefully archived as Deferred. This life cycle is described in XEP-0001, which contains the formal and canonical definitions for the types, states, and processes. Read more about the standards process. Communication around Standards and Extensions happens in the Standards Mailing List (online archive).

Proposed

The XEP development process starts by writing up an idea and submitting it to the XMPP Editor. Within two weeks, the Council decides whether to accept this proposal as an Experimental XEP.

New

  • Version 0.1.0 of XEP-0485 (PubSub Server Information)
    • Promoted to Experimental. (dg)
  • Version 0.1.0 of XEP-0486 (MUC Avatars)
    • Promoted to Experimental (XEP Editor: dg)
  • Version 0.1.0 of XEP-0487 (Host Meta 2 - One Method To Rule Them All)
    • Promoted to Experimental (XEP Editor: dg)
  • Version 0.1.0 of XEP-0488 (MUC Token Invite)
    • Promoted to Experimental (XEP Editor: dg)
  • Version 0.1.0 of XEP-0489 (Reporting Account Affiliations)
    • Promoted to Experimental (XEP Editor: dg)
  • Version 0.1.0 of XEP-0490 (Message Displayed Synchronization)
    • Promoted to Experimental (XEP Editor: dg)

Deferred

If an experimental XEP is not updated for more than twelve months, it will be moved off Experimental to Deferred. If there is another update, it will put the XEP back onto Experimental.

  • No XEPs deferred this month.

Updated

  • Version 1.25.0 of XEP-0001 (XMPP Extension Protocols)
    • Add note that editorial changes do not affect Deferred state (XEP Editor: dg)
  • Version 1.26.0 of XEP-0060 (Publish-Subscribe)
    • Add examples for publishing item without ID (melvo)
  • Version 1.1.1 of XEP-0313 (Message Archive Management)
    • Add XEP-0136 to superseded specifications (gdk)
  • Version 0.5.0 and 0.6.0 of XEP-0333 (Displayed Markers (was: Chat Markers))
    • Remove <received/> to not replicate functionality.
    • Remove <acknowledged/> because it was not implemented in the last 10 years and apparently is not needed.
    • Remove Disco feature. Opting in via <markable/> is enough (dg)
    • Add Business Rule about opportunistic Displayed Markers in 1:1 chats (dg)
  • Version 0.5.0 of XEP-0334 (Message Processing Hints)
    • Incorporate last call feedback from 2017.
    • Differences between this specification and XEP-0079 have been clarified.
    • A note about handling of hints found in error stanzas has been added. (mw)
  • Version 0.4.1 of XEP-0388 (Extensible SASL Profile)
    • Add missing elements to XML Schema
    • Add missing XMPP Registrar Considerations (dg)
  • Version 0.3.0 of XEP-0398 (User Avatar to vCard-Based Avatars Conversion)
    • Add text to explain that both and are valid implementations.
    • Add Security Considerations for both variants (dg)
  • Version 0.4.1 of XEP-0424 (Message Retraction)
    • Fix schema.
    • Add missing for attribute in fallback element (Example 4). (nc)
  • Version of XEP-0425 (Moderated Message Retraction)
    • Remove the dependency on XEP-0422 Message Fastening
    • Rename to ‘Moderated Message Retraction’ and focus only on the retraction use-case
    • Ensure compatibility with clients that only implement XEP-0424
    • Clarify the purpose of the <reason/> element
  • Version 0.3.0 and 0.3.1 of XEP-0447 (Stateless file sharing)
    • Describe how to use for multiple files, with body text, without any source in original message and compatibility with various current deployed protocols. (lmw)
    • Fix example for multiple files. (lmw)

Last Call

Last calls are issued once everyone seems satisfied with the current XEP status. After the Council decides whether the XEP seems ready, the XMPP Editor issues a Last Call for comments. The feedback gathered during the Last Call can help improve the XEP before returning it to the Council for advancement to Stable.

  • XEP-0333: Displayed Markers (was: Chat Markers)
  • XEP-0334: Message Processing Hints
  • XEP-0360: Nonzas (are not Stanzas)
  • XEP-0386: Bind 2
  • XEP-0388: Extensible SASL Profile
  • XEP-0392: Consistent Color Generation

Stable

  • Version 1.0.0 of XEP-0392 (Consistent Color Generation)
    • Accept as Stable as per Council Vote from 2024-03-27. (XEP Editor (dg))

Deprecated

  • No XEP deprecated this month.

Rejected

  • XEP-0360: Nonzas (are not Stanzas)

Spread the news

Please share the news on other networks:

Subscribe to the monthly XMPP newsletter
Subscribe

Also check out our RSS Feed!

Looking for job offers or want to hire a professional consultant for your XMPP project? Visit our XMPP job board.

Newsletter Contributors & Translations

This is a community effort, and we would like to thank translators for their contributions. Volunteers are welcome! Translations of the XMPP Newsletter will be released here (with some delay):

  • English (original): xmpp.org
    • General contributors: Adrien Bourmault (neox), Alexander “PapaTutuWawa”, Arne, cal0pteryx, emus, Federico, Jonas Stein, Kris “poVoq”, Licaon_Kter, Ludovic Bocquet, Mario Sabatino, melvo, MSavoritias (fae,ve), nicola, Simone Canaletti, XSF iTeam
  • French: jabberfr.org and linuxfr.org
    • Translators: Adrien Bourmault (neox), alkino, anubis, Arkem, Benoît Sibaud, mathieui, nyco, Pierre Jarillon, Ppjet6, Ysabeau
  • Italian: notes.nicfab.eu
    • Translators: nicola

Help us to build the newsletter

This XMPP Newsletter is produced collaboratively by the XMPP community. Each month’s newsletter issue is drafted in this simple pad. At the end of each month, the pad’s content is merged into the XSF Github repository. We are always happy to welcome contributors. Do not hesitate to join the discussion in our Comm-Team group chat (MUC) and thereby help us sustain this as a community effort. You have a project and want to spread the news? Please consider sharing your news or events here, and promote it to a large audience.

Tasks we do on a regular basis:

  • gathering news in the XMPP universe
  • short summaries of news and events
  • summary of the monthly communication on extensions (XEPs)
  • review of the newsletter draft
  • preparation of media images
  • translations
  • communication via media accounts

License

This newsletter is published under CC BY-SA license.

April 07, 2024 00:00

April 04, 2024

Ignite Realtime Blog

Smack 4.4.8 released

We are happy to announce the release of Smack 4.4.8, our XMPP client library for JVMs and Android. For a high-level overview of what’s changed in Smack 4.4.8, check out Smack’s changelog.

Smack 4.4.8 contains mostly small fixes. However, we fixed one nasty bug in Smack’s reactor that could cause a potentially endless busy loop. Smack’s new connection infrastructure makes heavy use of the reactor, which enables thousands of connections to be served by only a handful of threads.

As always, this Smack patchlevel release is API compatible within the same major-minor version series (4.4) and all Smack releases are available via Maven Central.

We would like to use this occasion to point out that Smack now ships with a NOTICE file. Please note that this adds some requirements when using Smack, as per the Apache License 2.0. The content of Smack’s NOTICE file can conveniently be retrieved using Smack.getNoticeStream().

1 post - 1 participant

Read full topic

by Flow at April 04, 2024 17:04

Erlang Solutions

A Guide to RabbitMQ

Looking to learn more about the basics of RabbitMQ? This powerful message broker plays a key role in modern, distributed systems. 

This post will break down its fundamentals and highlight its importance in the world of modern, distributed systems.

An introduction to RabbitMQ

RabbitMQ emerged from the need for a scalable, robust messaging system able to handle high volumes of communication between applications, all while maintaining both data integrity and performance.

It is now a popular open-source message broker, with its queueing software written in Erlang. One of its key strengths is its ability to support and adhere to Application Programming Interface (API) protocols, for example AMQP, HTTP and STOMP.

What are APIs you ask? 

They define the rules and conventions that allow different pieces of software to interact and communicate. For developers, APIs are the go-between that lets them access a piece of software’s or a service’s functionality without needing a full understanding of the ins and outs of that particular system.

In turn, these protocols offer a standard method of transmitting commands and data. The result? Seamless integration and interoperability between different systems.

Let’s circle back to one previously mentioned protocol, the Advanced Message Queuing Protocol (AMQP). This protocol was designed to ensure that messages are reliably delivered between applications, regardless of where the platforms they run on are located. AMQP has precise rules for the delivery, formatting and confirmation of messages, ensuring that every message sent through an AMQP-based system, like RabbitMQ, reaches its intended destination.

Here’s an illustration better explaining the AMQP system:

Source: The Heart of RabbitMQ

What is RabbitMQ used for?

Developers use RabbitMQ to process high-throughput, reliable background jobs and to facilitate integration and communication between applications. It is also great at managing complex routing to consumers while integrating various applications and services.

RabbitMQ is also a great solution for web servers that require a rapid request-response. It effectively distributes workloads between workers, handling over 20,000 messages per second. It can manage background jobs and longer-running tasks, for example PDF conversion and file scanning.

How does RabbitMQ work?

Think of RabbitMQ as a middleman. It collects messages from a producer (publisher) and passes them on to receivers (consumers). Using a messaging queue, it then holds messages until the consumers can process them. 

Here’s a better overview of these core systems:

  • Producer (publisher): Sends messages to a queue for processing by consumers.
  • Queue: Where messages are transferred and stored until they can be processed.
  • Consumer (receiver): Receives messages from queues and uses them for other defined tasks.
  • Exchange: The entry point for the messaging broker. It uses routing rules to determine which queues should receive the message.
  • Broker: A messaging system that stores produced data. Another application can connect to it using specific details, like parameters or connection strings, to receive and use that data.
  • Channel: Channels offer a lightweight connection to a broker via a shared Transmission Control Protocol (TCP) connection.

Source: RabbitMQ tutorials
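
To make the flow concrete, here is a minimal publish/consume sketch using the community amqp Elixir client; the connection URL, queue name and payload are placeholders, and the dependency version shown is only indicative:

# Minimal publish/consume sketch with the community `amqp` Elixir package
# (e.g. {:amqp, "~> 3.0"} in mix.exs). URL, queue name and payload are placeholders.
{:ok, conn} = AMQP.Connection.open("amqp://guest:guest@localhost")
{:ok, chan} = AMQP.Channel.open(conn)

# Declare a durable queue; declaring an existing queue is idempotent.
{:ok, _info} = AMQP.Queue.declare(chan, "pdf_jobs", durable: true)

# Producer: publish via the default exchange (""), routing by queue name.
:ok = AMQP.Basic.publish(chan, "", "pdf_jobs", "convert report.docx", persistent: true)

# Consumer: pull one message and acknowledge it.
{:ok, payload, meta} = AMQP.Basic.get(chan, "pdf_jobs")
:ok = AMQP.Basic.ack(chan, meta.delivery_tag)

AMQP.Connection.close(conn)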

Key features of RabbitMQ

As one of the most powerful and flexible messaging systems, RabbitMQ offers several key features, including:

Security: Various security features in RabbitMQ are designed to protect systems from unauthorised access and potential data breaches. With authentication and authorisation support, administrators can control which users or applications have access to certain queues or exchanges. It also supports SSL/TLS encryption to ensure secure communication between brokers and clients.

Reliability: RabbitMQ ensures reliable message delivery through features such as message acknowledgements and persistent message storage.

Scalable and fault-tolerant: RabbitMQ provides features for building scalable and fault-tolerant messaging systems. It also supports clustering, whereby adding more nodes to the cluster allows the system to handle higher message volumes. It’s then able to distribute the workload across multiple nodes, making for efficient utilisation of resources. In the case of a node failure, other nodes in the cluster can continue to handle messages without interruption.

Extended features: RabbitMQ is not limited to the AMQP protocol, but is very versatile and can support a host of others, such as MQTT and STOMP.

Enterprise and the Cloud: RabbitMQ is lightweight and easy to deploy on public as well as private clouds, using pluggable authentication and authorisation.

Tools and Plugins: RabbitMQ offers a host of tools and plugins, ideal for integration and wider support.

Common use cases for RabbitMQ

We’ve already highlighted the versatility of RabbitMQ in modern distributed systems. Given its robust features and flexible architecture, here are some of its most common use cases:

Legacy applications: RabbitMQ integrates with legacy systems using available or custom plugins. You can connect consumer apps to legacy apps, for example connecting JMS applications using the Java Message Service (JMS) plugin and JMS client library.

Distributed systems: RabbitMQ serves as a messaging infrastructure in distributed systems. It fosters asynchronous communication between different components, facilitating the scalability and decoupling of the system.

IoT applications: When used in Internet of Things (IoT) applications, RabbitMQ can handle the exchange of messages between devices and backend systems, allowing for reliable and efficient communication, control and real-time monitoring of IoT devices.

Chat applications: For real-time communication in chat applications, RabbitMQ manages messaging exchanges between users, facilitating group chat and instant messaging functionalities. 

Task/job queues: RabbitMQ manages task queues and distributes work across multiple workers. This means that tasks are processed efficiently and at scale, reducing bottlenecks and utilising resources. 

Event-driven architectures: RabbitMQ is well suited to implementing event-driven architectures. It allows various system components to respond to events and seamlessly interact with each other.

Microservices communication: A common use of RabbitMQ is enabling asynchronous and reliable communication between microservices. Messages are delivered even if some services are unavailable.

To conclude 

As businesses seek to adopt distributed architectures and microservices-based applications, RabbitMQ remains a go-to choice for improved adaptability and seamless integration across systems. If you’d like to discuss how RabbitMQ can improve your applications, get in touch with the Erlang Solutions team.

The post A Guide to RabbitMQ appeared first on Erlang Solutions.

by Erlang Solutions Team at April 04, 2024 12:20

Isode

Harrier 4.0 – New Capabilities

Harrier is our Military Messaging client. It provides a modern, secure web UI that supports SMTP, STANAG 4406 and ACP 127. Harrier allows authorised users to access role-based mailboxes and respond as a role within an organisation rather than as an individual.

You can find out more about Harrier here.

Server Administration and Monitoring

Harrier 4.0 adds a Web interface for server administrators to configure Harrier. Key points:

  • Secure bootstrap
  • Sensible defaulting of parameters to facilitate startup
  • Per domain and global configuration options
  • Security features, including TLS, HSM, S/MIME and Security Labels/Security Policy
  • Full configuration of all Harrier options and capabilities

In addition to configuration, the Web user interface provides a monitoring capability to show server activity and key operational parameters.

UI Enhancements

A number of improvements have been made to the Harrier UI, including:

  • Variable size compose windows, retaining user preferences and stacking multiple windows
  • HTML Message editing:
    • Font bold/italic/underline/colour
    • Lists and Bullets
    • Reply to HTML messages
  • Undo and redo in message editor
  • Organizations in the “from” selection have a configurable default and alphabetic sort.
  • Active role shown on browser tab. Facilitates working with multiple roles in different tabs.
  • Extended message search capabilities to include:
    • Filter by precedence
    • Free text search in choice of: body; subject; SIC; action; info; from

Security Enhancements

The following security enhancements have been added:

  • Per domain S/MIME signing policy (never/if possible/always). Model is administrator choice rather than user selection.
  • Policy control of using S/MIME header signing.
  • Policy choice to alert users to unsigned messages.
  • Policy choice to allow encryption.
  • Policy choice of encryption by enveloping or triple wrap.
  • Message Decrypt on initial access. The primary goal of S/MIME encryption is end to end protection. Some clients leave messages encrypted, which can lead to problems over time if keys become unavailable or are changed. Decryption prevents these issues. Note that for triple wrap, the inner signature is retained.

Other Enhancements

  • Server option to force user confirmation of message send (audit logged). Important in some scenarios to confirm message responsibility.
  • Option to configure multiple address books in different directories.
  • Revalidation of recipients before message release.
  • Timezone option to be Zulu or Local.

by admin at April 04, 2024 11:33

April 01, 2024

ProcessOne

ejabberd 24.02

🚀 Introducing ejabberd 24.02: A Huge Release!

ejabberd 24.02 has just been released and, well, this is a huge release with 200 commits and more in the libraries. We’ve packed this update with a plethora of new features, significant improvements, and essential bug fixes, all designed to supercharge your messaging infrastructure.


🌐 Matrix Federation Unleashed: Imagine seamlessly connecting with Matrix servers – it’s now possible! ejabberd breaks new ground in cross-platform communication, fostering a more interconnected messaging universe. We still have some ground to cover, and for that we are waiting for your feedback.
🔐 Cutting-Edge Security with TLS 1.3 & SASL2: In an era where security is paramount, ejabberd steps up its game. With support for TLS 1.3 and advanced SASL2 protocols, we increase the overall security for all platform users.
🚀 Performance Enhancements with Bind 2: Faster connection times, especially crucial for mobile network users, thanks to Bind 2 and other performance optimizations.
🔄 Users gain better control over their messages: The new support for XEP-0424: Message Retraction allows users to manage their message history and remove something they posted by mistake.
🔧 Optimized server pings by relying on an existing mechanism from XEP-0198
📈 Streamlined API Versioning: Our refined API versioning means smoother, more flexible integration for your applications.
🧩 Enhanced Elixir, Mix and Rebar3 Support

If you upgrade ejabberd from a previous release, please review those changes:

A more detailed explanation of those topics and other features:

Matrix federation

ejabberd is now able to federate with Matrix servers. Detailed instructions on how to set up Matrix federation with ejabberd will be provided in another post.

Here is a quick summary of the configuration steps:

First, s2s must be enabled on ejabberd. Then define a listener that uses mod_matrix_gw:

listen:
  -
    port: 8448
    module: ejabberd_http
    tls: true
    certfile: "/opt/ejabberd/conf/server.pem"
    request_handlers:
      "/_matrix": mod_matrix_gw

And add mod_matrix_gw in your modules:

modules:
  mod_matrix_gw:
    matrix_domain: "domain.com"
    key_name: "somename"
    key: "yourkeyinbase64"

Support TLS 1.3, Bind 2, SASL2

Support for XEP-0424 Message Retraction

With the new support for XEP-0424: Message Retraction, users of MAM message archiving can control their message archiving, with the ability to ask for deletion.

Support for XEP-0198 pings

If stream management is enabled, let mod_ping trigger XEP-0198 <r/>equests rather than sending XEP-0199 pings. This avoids the overhead of the ping IQ stanzas, which, if stream management is enabled, are accompanied by XEP-0198 elements anyway.

Update the SQL schema

The table archive has a text column named origin_id (see commit 975681). You have two methods to update the SQL schema of your existing database:

If using MySQL or PostgreSQL, you can enable the option update_sql_schema and ejabberd will take care of updating the SQL schema when needed: add the line update_sql_schema: true to your ejabberd configuration file.

If you are using another database, or prefer to update the SQL schema manually:

  • MySQL default schema:
ALTER TABLE archive ADD COLUMN origin_id text NOT NULL DEFAULT '';
ALTER TABLE archive ALTER COLUMN origin_id DROP DEFAULT;
CREATE INDEX i_archive_username_origin_id USING BTREE ON archive(username(191), origin_id(191));
  • MySQL new schema:
ALTER TABLE archive ADD COLUMN origin_id text NOT NULL DEFAULT '';
ALTER TABLE archive ALTER COLUMN origin_id DROP DEFAULT;
CREATE INDEX i_archive_sh_username_origin_id USING BTREE ON archive(server_host(191), username(191), origin_id(191));
  • PostgreSQL default schema:
ALTER TABLE archive ADD COLUMN origin_id text NOT NULL DEFAULT '';
ALTER TABLE archive ALTER COLUMN origin_id DROP DEFAULT;
CREATE INDEX i_archive_username_origin_id ON archive USING btree (username, origin_id);
  • PostgreSQL new schema:
ALTER TABLE archive ADD COLUMN origin_id text NOT NULL DEFAULT '';
ALTER TABLE archive ALTER COLUMN origin_id DROP DEFAULT;
CREATE INDEX i_archive_sh_username_origin_id ON archive USING btree (server_host, username, origin_id);
  • MSSQL default schema:
ALTER TABLE [dbo].[archive] ADD [origin_id] VARCHAR (250) NOT NULL;
CREATE INDEX [archive_username_origin_id] ON [archive] (username, origin_id)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);
  • MSSQL new schema:
ALTER TABLE [dbo].[archive] ADD [origin_id] VARCHAR (250) NOT NULL;
CREATE INDEX [archive_sh_username_origin_id] ON [archive] (server_host, username, origin_id)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);
  • SQLite default schema:
ALTER TABLE archive ADD COLUMN origin_id text NOT NULL DEFAULT '';
CREATE INDEX i_archive_username_origin_id ON archive (username, origin_id);
  • SQLite new schema:
ALTER TABLE archive ADD COLUMN origin_id text NOT NULL DEFAULT '';
CREATE INDEX i_archive_sh_username_origin_id ON archive (server_host, username, origin_id);

Authentication workaround for Converse.js and Strophe.js

This ejabberd release includes support for XEP-0474: SASL SCRAM Downgrade Protection, and some clients may not support it correctly yet.

If you are using Converse.js 10.1.6 or older, Movim 0.23 Kojima or older, or any other client based in Strophe.js v1.6.2 or older, you may notice that they cannot authenticate correctly to ejabberd.

To solve that problem, either update to newer versions of those programs (if they exist), or you can enable temporarily the option disable_sasl_scram_downgrade_protection in the ejabberd configuration file ejabberd.yml like this:

disable_sasl_scram_downgrade_protection: true

Support for API versioning

Until now, when a new ejabberd release changed some API command (an argument renamed, a result in a different format…), then you had to update your API client to the new API at the same time that you updated ejabberd.

Now the ejabberd API commands can have different versions, by default the most recent one is used, and the API client can specify the API version it supports.

In fact, this feature was implemented seven years ago, included in ejabberd 16.04, documented in ejabberd Docs: API Versioning… but it was never actually used!

This ejabberd release includes many fixes to get API versioning up to date, and it starts being used by several commands.

Let’s say that ejabberd 23.10 implemented API version 0, and this ejabberd 24.02 adds API version 1. You may want to update your API client to use the new API version 1… or you can continue using API version 0 and delay API update a few weeks or months.

To continue using API version 0:
– if using ejabberdctl, use the switch --version 0. For example: ejabberdctl --version 0 get_roster admin localhost
– if using mod_http_api, in ejabberd configuration file add v0 to the request_handlers path. For example: /api/v0: mod_http_api

Check the details in ejabberd Docs: API Versioning.

ejabberd commands API version 1

When you want to update your API client to support ejabberd API version 1, those are the changes to take into account:
– Commands with list arguments
– mod_http_api does not name integer and string results
– ejabberdctl with list arguments
– ejabberdctl list results

All those changes are described in the next sections.

Commands with list arguments

Several commands now use list argument instead of a string with separators (different commands used different separators: ; : \\n ,).

The commands improved in API version 1:
add_rosteritem
oauth_issue_token
send_direct_invitation
srg_create
subscribe_room
subscribe_room_many

For example, srg_create in API version 0 took as arguments:

{"group": "group3",
 "host": "myserver.com",
 "label": "Group3",
 "description": "Third group",
 "display": "group1\\ngroup2"}

now in API version 1 the command expects as arguments:

{"group": "group3",
 "host": "myserver.com",
 "label": "Group3",
 "description": "Third group",
 "display": ["group1", "group2"]}

mod_http_api not named results

There was an incoherence in mod_http_api results when they were integer/string and when they were list/tuple/rescode…: the result contained the name, for example:

$ curl -k -X POST -H "Content-type: application/json" -d '{}' "http://localhost:5280/api/get_loglevel/v0"
{"levelatom":"info"}

Starting in API version 1, when the result is an integer or a string, it will not contain the result name. This is now coherent with the other result formats (list, tuple, …), which don’t contain the result name either.

Some examples with API version 0 and API version 1:

$ curl -k -X POST -H "Content-type: application/json" -d '{}' "http://localhost:5280/api/get_loglevel/v0"
{"levelatom":"info"}

$ curl -k -X POST -H "Content-type: application/json" -d '{}' "http://localhost:5280/api/get_loglevel"
"info"

$ curl -k -X POST -H "Content-type: application/json" -d '{"name": "registeredusers"}' "http://localhost:5280/api/stats/v0"
{"stat":2}

$ curl -k -X POST -H "Content-type: application/json" -d '{"name": "registeredusers"}' "http://localhost:5280/api/stats"
2

$ curl -k -X POST -H "Content-type: application/json" -d '{"host": "localhost"}' "http://localhost:5280/api/registered_users/v0"
["admin","user1"]

$ curl -k -X POST -H "Content-type: application/json" -d '{"host": "localhost"}' "http://localhost:5280/api/registered_users"
["admin","user1"]

ejabberdctl with list arguments

ejabberdctl now supports list and tuple arguments, like mod_http_api and ejabberd_xmlrpc. This allows ejabberdctl to execute all the existing commands, even some that were impossible until now like create_room_with_opts and set_vcard2_multi.

List elements are separated with , and tuple elements are separated with :.

Relevant commands:
add_rosteritem
create_room_with_opts
oauth_issue_token
send_direct_invitation
set_vcard2_multi
srg_create
subscribe_room
subscribe_room_many

Some example uses:

ejabberdctl add_rosteritem user1 localhost testuser7 localhost NickUser77l gr1,gr2,gr3 both
ejabberdctl create_room_with_opts room1 conference.localhost localhost public:false,persistent:true
ejabberdctl subscribe_room_many user1@localhost:User1,admin@localhost:Admin room1@conference.localhost urn:xmpp:mucsub:nodes:messages,u

ejabberdctl list results

Until now, ejabberdctl returned list elements separated with ;. Now in API version 1 list elements are separated with ,.

For example, in ejabberd 23.10:

$ ejabberdctl get_roster admin localhost
jan@localhost jan   none    subscribe       group1;group2
tom@localhost tom   none    subscribe       group3

Since this ejabberd release, using API version 1:

$ ejabberdctl get_roster admin localhost
jan@localhost jan   none    subscribe       group1,group2
tom@localhost tom   none    subscribe       group3

it is still possible to get the results in the old syntax, using API version 0:

$ ejabberdctl --version 0 get_roster admin localhost
jan@localhost jan   none    subscribe       group1;group2
tom@localhost tom   none    subscribe       group3

ejabberdctl help improved

ejabberd supports around 200 administrative commands, and you probably consult them in the ejabberd Docs -> API Reference page, where all the commands’ documentation is perfectly displayed…

The ejabberdctl command-line script already allowed you to consult the command documentation, querying your ejabberd server in real time to show exactly the commands that are available. But it lacked some details about the commands. That has been improved, and ejabberdctl now shows all the information, including argument descriptions, examples and version notes.

For example, the connected_users_vhost command documentation as seen in the ejabberd Docs site is equivalently visible using ejabberdctl:

$ ejabberdctl help connected_users_vhost
  Command Name: connected_users_vhost

  Arguments: host::binary : Server name

  Result: connected_users_vhost::[ sessions::string ]

  Example: ejabberdctl connected_users_vhost "myexample.com"
           user1@myserver.com/tka
           user2@localhost/tka

  Tags: session

  Module: mod_admin_extra

  Description: Get the list of established sessions in a vhost

Experimental support for Erlang/OTP 27

Erlang/OTP 27.0-rc1 was recently released, and ejabberd can be compiled with it. If you are developing or experimenting with ejabberd, it would be great if you can use Erlang/OTP 27 and report any problems you find. For production servers, it’s recommended to stick with Erlang/OTP 26.2 or any previous version.

In this sense, the rebar and rebar3 binaries included with ejabberd are also updated: now they support from Erlang 24 to Erlang 27. If you want to use older Erlang versions from 20 to 23, there are compatible binaries available in git: rebar from ejabberd 21.12 and rebar3 from ejabberd 21.12.

Of course, if you already have rebar or rebar3 installed in your system, it’s preferable to use those, because they will probably be perfectly compatible with whatever Erlang version you have installed.

Installers and ejabberd container image

The binary installers now include the recent and stable Erlang/OTP 26.2.2 and Elixir 1.16.1. Many other dependencies were updated in the installers, the most notable being OpenSSL, which has jumped to version 3.2.1.

The ejabberd container image and the ecs container image have gotten all those version updates, and also Alpine is updated to 3.19.

By the way, this container image already had support for running commands when the container starts… And now you can set up the commands to be allowed to fail, by prepending the character !.

Summary of compilation methods

When compiling ejabberd from source code, you may have noticed there are a lot of possibilities. Let’s take an overview before digging in the new improvements:

  • Tools to manage the dependencies and compilation:
    • Rebar: it is nowadays very obsolete, but still does the job of compiling ejabberd
    • Rebar3: the successor of Rebar, with many improvements and plugins, supports hex.pm and Elixir compilation
    • Mix: included with the Elixir programming language, supports hex.pm, and erlang compilation
  • Installation methods:
    • make install: copies the files to the system
    • make prod: prepares a self-contained OTP production release in _build/prod/, and generates a tar.gz file. This was previously named make rel
    • make dev: prepares quickly an OTP development release in _build/dev/
    • make relive: prepares the barely minimum in _build/relive/ to run ejabberd and starts it
  • Start scripts and alternatives:
    • ejabberdctl with erlang shell: start/foreground/live
    • ejabberdctl with elixir shell: iexlive
    • ejabberd console/start (this script is generated by rebar3 or mix, and does not support ejabberdctl configurable options)

For example:
– the CI dynamic tests use rebar3, and Runtime tries to test all the possible combinations
– ejabberd binary installers are built using: mix + make prod
– container images are built using: mix + make prod too, and started with ejabberdctl foreground

Several combinations didn’t work correctly until now and have been fixed, for example:
– mix + make relive
– mix + make prod/dev + ejabberdctl iexlive
– mix + make install + ejabberdctl start/foreground/live
– the buggy make uninstall gets an experimental alternative: make uninstall-rel
– rebar + make prod with Erlang 26

Use Mix or Rebar3 by default instead of Rebar to compile ejabberd

ejabberd has used Rebar to manage dependencies and compilation since ejabberd 13.10 4d8f770. However, that tool has been obsolete and unmaintained for years, because there is a complete replacement:

Rebar3 has been supported by ejabberd since 20.12 0fc1aea. Among other benefits, it allows downloading dependencies from hex.pm and caching them in your system instead of downloading them from git every time, and it allows compiling Elixir files and Elixir dependencies.

In fact, ejabberd can be compiled using mix (a tool included with the Elixir programming language) since ejabberd 15.04 ea8db99 (with improvements in ejabberd 21.07 4c5641a).

For those reasons, the tool selection performed by ./configure will now be:
– If --with-rebar=rebar3 but Rebar3 not found installed in the system, use the rebar3 binary included with ejabberd
– Use the program specified in option: --with-rebar=/path/to/bin
– If none is specified, use the system mix
– If Elixir not found, use the system rebar3
– If Rebar3 not found, use the rebar3 binary included with ejabberd

Removed Elixir support in Rebar

Support for Elixir 1.1 was added as a dependency in commit 01e1f67 for ejabberd 15.02. This allowed compiling Elixir files. But since Elixir 1.4.5 (released Jun 22, 2017) it hasn’t been possible to get Elixir as a dependency… it is nowadays a standalone program. For that reason, support for downloading the old Elixir 1.4.4 as a dependency has been removed.

When Elixir support is required, it is better to simply install Elixir and use mix as the build tool:

./configure --with-rebar=mix

Or install Elixir and use the experimental Rebar3 support to compile Elixir files and dependencies:

./configure --with-rebar=rebar3 --enable-elixir

Added Elixir support in Rebar3

It is now possible to compile ejabberd using Rebar3 with Elixir compilation support. This compiles the Elixir files included in ejabberd’s lib/ path. There is also support for fetching dependencies written in Elixir, and it is possible to build OTP releases that include Elixir support.

It is necessary to have Elixir installed in the system and to configure the compilation using --enable-elixir. For example:

apt-get install erlang erlang-dev elixir
git clone https://github.com/processone/ejabberd.git ejabberd
cd ejabberd
./autogen.sh
./configure --with-rebar=rebar3 --enable-elixir
make
make dev
_build/dev/rel/ejabberd/bin/ejabberdctl iexlive

Elixir versions supported

Elixir 1.10.3 is the minimum supported, but:
– Elixir 1.10.3 or higher is required to build an OTP release with make prod or make dev
– Elixir 1.11.4 or higher is required to build an OTP release if using Erlang/OTP 24 or higher
– Elixir 1.11.0 or higher is required to use make relive
– Elixir 1.13.4 with Erlang/OTP 23.0 are the lowest versions tested by Runtime

For all those reasons, if you want to use Elixir, it is highly recommended to use Elixir 1.13.4 or higher with Erlang/OTP 23.0 or higher.

make rel is renamed to make prod

When ejabberd started using the Rebar2 build tool, that tool could create an OTP release, and the target in Makefile.in was conveniently named make rel.

However, newer tools like Rebar3 and Elixir’s Mix support creating different types of releases: production, development, … In this sense, our make rel target is nowadays more properly named make prod.

For backwards compatibility, make rel redirects to make prod.

New make install-rel and make uninstall-rel

This is an alternative method to install ejabberd in the system, based on the OTP release process. It should produce exactly the same results as the existing make install.

The benefits of make install-rel over the existing method:
– this uses OTP release code from rebar/rebar3/mix, and consequently requires less code in our Makefile.in
– make uninstall-rel correctly deletes all the library files

This is still experimental, and it would be great if you could test it and report any problems; eventually this method could replace the existing one.

Just for curiosity:
– ejabberd 13.03-beta1 added support for make uninstall
– ejabberd 13.10 introduced the Rebar build tool and the code got more modular
– ejabberd 15.10 started to use the OTP directory structure for ‘make install’, and this broke make uninstall

Acknowledgments

We would like to thank those who contributed to the source code, documentation, and translation of this release:

And also to all the people contributing in the ejabberd chatroom, issue tracker…

Improvements in ejabberd Business Edition

Customers of the ejabberd Business Edition, in addition to all those improvements and bugfixes, also get:

Push

  • Fix clock issue when signing Apple push JWT tokens
  • Share Apple push JWT tokens between nodes in cluster
  • Increase allowed certificates chain depth in GCM requests
  • Use x:oob data as source for image delivered in pushes
  • Process only https urls in oob as images in pushes
  • Fix jid in disable push iq generated by GCM and Webhook service
  • Add better logging for TooManyProviderTokenUpdated error
  • Make get_push_logs command generate better error if mod_push_logger not available
  • Add command get_push_logs that can be used to retrieve info about recent pushes and errors reported by push services
  • Add support for webpush protocol for sending pushes to safari/chrome/firefox browsers

MAM

  • Expand mod_mam_http_access API to also accept range of messages

MUC

  • Update mod_muc_state_query to fix subject_author room state field
  • Fix encoding of config xdata in mod_muc_state_query

PubSub

  • Allow pubsub node owner to overwrite items published by other persons (p1db)

ChangeLog

This is a more detailed list of changes in this ejabberd release:

Core

  • Added Matrix gateway in mod_matrix_gw
  • Support SASL2 and Bind2
  • Support tls-server-end-point channel binding and sasl2 codec
  • Support tls-exporter channel binding
  • Support XEP-0474: SASL SCRAM Downgrade Protection
  • Fix presenting features and returning results of inline bind2 elements
  • disable_sasl_scram_downgrade_protection: New option to disable XEP-0474
  • negotiation_timeout: Increase default value from 30s to 2m
  • mod_carboncopy: Teach how to interact with bind2 inline requests

Other

  • ejabberdctl: Fix startup problem when having set EJABBERD_OPTS and logger options
  • ejabberdctl: Set EJABBERD_OPTS back to "", and use previous flags as example
  • eldap: Change logic for eldap tls_verify=soft and false
  • eldap: Don’t set fail_if_no_peer_cert for eldap ssl client connections
  • Ignore hints when checking for chat states
  • mod_mam: Support XEP-0424 Message Retraction
  • mod_mam: Fix XEP-0425: Message Moderation with SQL storage
  • mod_ping: Support XEP-0198 pings when stream management is enabled
  • mod_pubsub: Normalize pubsub max_items node options on read
  • mod_pubsub: PEP nodetree: Fix reversed logic in node fixup function
  • mod_pubsub: Only care about PEP bookmarks options when creating node from scratch

SQL

  • MySQL: Support sha256_password auth plugin
  • ejabberd_sql_schema: Use the first unique index as a primary key
  • Update SQL schema files for MAM’s XEP-0424
  • New option sql_flags: right now only useful to enable mysql_alternative_upsert

Installers and Container

  • Container: Add ability to ignore failures in execution of CTL_ON_* commands
  • Container: Update to Erlang/OTP 26.2, Elixir 1.16.1 and Alpine 3.19
  • Container: Update this custom ejabberdctl to match the main one
  • make-binaries: Bump OpenSSL 3.2.1, Erlang/OTP 26.2.2, Elixir 1.16.1
  • make-binaries: Bump many dependency versions

Commands API

  • print_sql_schema: New command available in ejabberdctl command-line script
  • ejabberdctl: Rework temporary node name generation
  • ejabberdctl: Print argument description, examples and note in help
  • ejabberdctl: Document exclusive ejabberdctl commands like all the others
  • Commands: Add a new muc_sub tag to all the relevant commands
  • Commands: Improve syntax of many commands documentation
  • Commands: Use list arguments in many commands that used separators
  • Commands: set_presence: switch priority argument from string to integer
  • ejabberd_commands: Add the command API version as a tag vX
  • ejabberd_ctl: Add support for list and tuple arguments
  • ejabberd_xmlrpc: Fix support for restuple error response
  • mod_http_api: When no specific API version is requested, use the latest

Compilation with Rebar3/Elixir/Mix

  • Fix compilation with Erlang/OTP 27: don’t use the reserved word ‘maybe’
  • configure: Fix explanation of --enable-group option (#4135)
  • Add observer and runtime_tools in releases when --enable-tools
  • Update “make translations” to reduce build requirements
  • Use Luerl 1.0 for Erlang 20, 1.1.1 for 21-26, and temporary fork for 27
  • Makefile: Add install-rel and uninstall-rel
  • Makefile: Rename make rel to make prod
  • Makefile: Update make edoc to use ExDoc, requires mix
  • Makefile: No need to use escript to run rebar|rebar3|mix
  • configure: If --with-rebar=rebar3 but rebar3 not system-installed, use local one
  • configure: Use Mix or Rebar3 by default instead of Rebar2 to compile ejabberd
  • ejabberdctl: Detect problem running iex or etop and show explanation
  • Rebar3: Include Elixir files when making a release
  • Rebar3: Workaround to fix protocol consolidation
  • Rebar3: Add support to compile Elixir dependencies
  • Rebar3: Compile explicitly our Elixir files when --enable-elixir
  • Rebar3: Provide proper path to iex
  • Rebar/Rebar3: Update binaries to work with Erlang/OTP 24-27
  • Rebar/Rebar3: Remove Elixir as a rebar dependency
  • Rebar3/Mix: If dev profile/environment, enable tools automatically
  • Elixir: Fix compiling ejabberd as a dependency (#4128)
  • Elixir: Fix ejabberdctl start/live when installed
  • Elixir: Fix: FORMATTER ERROR: bad return value (#4087)
  • Elixir: Fix: Couldn’t find file Elixir Hex API
  • Mix: Enable stun by default when vars.config not found
  • Mix: New option vars_config_path to set path to vars.config (#4128)
  • Mix: Fix ejabberdctl iexlive problem locating iex in an OTP release

Full Changelog

https://github.com/processone/ejabberd/compare/23.10…24.02

ejabberd 24.02 download & feedback

As usual, the release is tagged in the Git source code repository on GitHub.

The source package and installers are available in ejabberd Downloads page. To check the *.asc signature files, see How to verify ProcessOne downloads integrity.

For convenience, there are alternative download locations like the ejabberd DEB/RPM Packages Repository and the GitHub Release / Tags.

The ecs container image is available in docker.io/ejabberd/ecs and ghcr.io/processone/ecs. The alternative ejabberd container image is available in ghcr.io/processone/ejabberd.

If you consider that you’ve found a bug, please search or fill a bug report on GitHub Issues.

The post ejabberd 24.02 first appeared on ProcessOne.

by Jérôme Sautret at April 01, 2024 14:59

March 31, 2024

Monal IM

iOS app banned from Chinese App Store

Some may have predicted it, but now it has happened: the Chinese government banned Monal from the Chinese App Store. Below is the complete email we got from Apple regarding this ban. We got that mail twice, once on Wed, 27 Mar 2024 15:46:18 +0100 and a second time on Thu, 28 Mar 2024 17:01:19 +0100.
The macOS version of Monal is still available in the App Store and via Homebrew, though.

Here is the full mail, a translation of the CAC articles can be found over here for reference.


Hello,

We are writing to notify you that your application, per demand from the CAC (Cyberspace Administration of China), will be removed from the China App Store because it includes content that is illegal in China, which is not in compliance with the App Review Guidelines:

  1. Legal Apps must comply with all legal requirements in any location where you make them available (if you’re not sure, check with a lawyer). We know this stuff is complicated, but it is your responsibility to understand and make sure your app conforms with all local laws, not just the guidelines below. And of course, apps that solicit, promote, or encourage criminal or clearly reckless behavior will be rejected.

According to the CAC, your app violates Articles 3 of the Provisions on the Security Assessment of Internet-based Information Services with Attribute of Public Opinions or Capable of Social Mobilization (具有舆论属性或社会动员能力的互联网信息服务安全评估规定).

If you need additional information regarding this removal or the laws and requirements in China, we encourage you to reach out directly to the CAC (Cyberspace Administration of China).

While your app has been removed from the China App Store, it is still available in the App Stores for the other territories you selected in App Store Connect. The TestFlight version of this app will also be unavailable for external and internal testing in China and all public TestFlight links will no longer be functional.

Best regards,

App Review

March 31, 2024 00:00