Development

NestJS and Kafka

Apache Kafka, a distributed message broker, integrates easily into NestJS servers, as described in the respective documentation. This article gives you a short overview of the default communication behavior, some of the available features and configuration options, and some good practices.

Overview of Basic Kafka Features in NestJS

NestJS Integration

  • Summary: NestJS facilitates the integration of Kafka, offering a streamlined approach for both message production and consumption within applications.
  • Default Behavior: Through the use of decorators and modules, NestJS abstracts much of the complexity involved in setting up Kafka clients, enabling straightforward message handling capabilities.
  • Configuration: Kafka can be configured at various points within a NestJS application, including during bootstrap for global settings, within the AppModule for application-wide settings, and within feature modules for localized settings. This flexibility allows for detailed control over consumer groups, error handling, and message retry strategies.
  • Bootstrap Initialization: Initializing Kafka in the bootstrap function is essential for setting up the Kafka microservice within the application context. It allows the app to connect to Kafka as a microservice and is the place for global Kafka configuration, such as consumer group settings and error handling strategies.
    const app = await NestFactory.create(AppModule, …)
    app.connectMicroservice(kafkaOptions)
    await app.startAllMicroservices()
    await app.listen(4000, '0.0.0.0')
  • AppModule Initialization: You often see an application-wide Kafka configuration in the main AppModule, potentially overriding or complementing the bootstrap settings. This is redundant: the bootstrap initialization is mandatory anyway, so an additional registration in the AppModule adds nothing. So don’t do it.
    @Module({
      imports: [KafkaModule.register(kafkaOptions)]
    })
    export class AppModule {}
  • Feature Module Kafka Client Injection: Necessary when using kafka service injection to produce messages (this.kafka.emit(topic, data)) or when needing explicit control over the Kafka client in a specific module. When you’re only consuming messages using @EventPattern or @MessagePattern, without the need to explicitly produce messages within a service, the direct injection of Kafka (ClientKafka) might not be necessary. The @EventPattern and @MessagePattern decorators can be used in controllers or providers to handle incoming Kafka messages without the need for direct client injection.
    @Module({
      imports: [ClientsModule.register([{ name: 'kafka', kafkaOptions }])],
      providers: [XyzService],
      controllers: [XyzController],
      exports: [XyzService]
    })
    export class XyzModule {}
  • Service Injection: Kafka injection in a service requires the above-mentioned feature module Kafka client injection. Then, in the constructor of class XyzService, you can use the following pattern to get access to the Kafka client functions, namely this.kafka.emit. The name in
    @Inject('kafka') is arbitrary and must match name: 'kafka' in ClientsModule.register.

    constructor(@Inject('kafka') private kafka: ClientKafka) {}
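The snippets above all reference a `kafkaOptions` object without defining it. As a rough sketch (broker addresses, client id and group id are placeholder assumptions, not part of the original text), such an object might look like this:

```typescript
import {Transport, KafkaOptions} from '@nestjs/microservices'

// All concrete values below are placeholders; adjust them to your setup.
export const kafkaOptions: KafkaOptions = {
  transport: Transport.KAFKA,
  options: {
    client: {
      clientId: 'my-app',          // identifies this application to the broker
      brokers: ['localhost:9092']  // your Kafka broker addresses
    },
    consumer: {
      groupId: 'my-app-group'      // consumers sharing this id share the workload
    }
  }
}
```

This is the object passed to `app.connectMicroservice(kafkaOptions)` during bootstrap.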

One-to-Many Broadcast

  • Summary: Kafka’s model allows for broadcasting messages to multiple consumers. All consumers subscribed to a topic will receive messages sent to that topic.
  • Default Behavior: By default, all messages sent to a topic are broadcasted to all consumers subscribed to that topic.
  • Configuration: Configuration is managed at the consumer level by subscribing to topics.
  • Example:
    @Injectable()
    export class MyService {
      @EventPattern('myTopic')
      async handleBroadcastMessage(@Payload() message: any) {
        // Process message
      }
    }
    

Error Handling and Retries

  • Summary: In NestJS, unhandled exceptions during Kafka message processing lead to retries, affecting the message’s processing within its topic or potentially the entire client group if only a single processing thread is available.
  • Default Behavior: Throwing an exception in an event handler indicates to Kafka to retry the message. This may block further processing of the topic or the entire client group if it operates with a single thread.
  • Configuration: To manage retries and error handling more granularly, disable auto-commit and control offset commits manually, or use specific exceptions like `KafkaRetriableException` for controlled retry behavior.
  • Example:
    @EventPattern('requestTopic') handleRequest(data) {
      throw new Error() // This leads to a retry
    }
    
  • Good Practice Retry Pattern: Implementing a manual retry mechanism by re-emitting the failed message back to its topic can serve as a pragmatic approach to ensure that processing attempts continue without indefinitely blocking the queue. This pattern, however, is best suited for scenarios where message order is not paramount.
    @EventPattern('requestTopic') handleRequest(data) {
      try {
        // Perform the required processing
      } catch (e) {
        this.kafka.emit('requestTopic', data) // re-add to the back of the queue
      }
    }
    
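The re-emit pattern above retries forever if the error is permanent. One pragmatic refinement (a sketch of our own convention, not a Kafka or NestJS feature) is to carry a retry counter in the message payload and give up after a bounded number of attempts:

```typescript
// `retryCount` is a convention of our own payloads, not a Kafka feature.
interface RetriableMessage {
  retryCount?: number
  [key: string]: unknown
}

const MAX_RETRIES = 3

// Returns the message to re-emit, or null when processing should stop
// (e.g. log the failure or forward to a dead-letter topic instead).
function nextRetry(message: RetriableMessage): RetriableMessage | null {
  const count = message.retryCount ?? 0
  if (count >= MAX_RETRIES) return null
  return {...message, retryCount: count + 1}
}
```

In the catch block you would then emit `nextRetry(data)` back to the topic when it is non-null, and log or dead-letter the message otherwise.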

Auto-Commit vs. Manual-Commit

  • Summary: Kafka supports both auto-committing offsets and manual offset management.
  • Default Behavior: Auto-commit is enabled by default, committing offsets at a configured interval.
  • Configuration: To switch to manual commit, disable auto-commit and manually manage offset commits.
  • Example:
    // Disable auto-commit (in NestJS, these options are handed to the
    // underlying KafkaJS consumer via the `run` property)
    runConfig = {
      ...runConfig,
      autoCommit: false
    }
    
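With auto-commit disabled, the handler has to commit offsets itself; in NestJS this is done through the `KafkaContext` (`context.getConsumer().commitOffsets(...)`). One easily missed detail is that Kafka expects the offset of the next message to read, not the one just processed. A small helper (a sketch of our own, not a NestJS API) makes this explicit:

```typescript
// Kafka offsets are 64-bit integers delivered as strings, hence BigInt.
// Commit the offset of the *next* message to read: current offset + 1.
function nextOffsetToCommit(offset: string): string {
  return (BigInt(offset) + BigInt(1)).toString()
}
```

In a handler you would then call something like `context.getConsumer().commitOffsets([{topic, partition, offset: nextOffsetToCommit(offset)}])`, with topic, partition and offset taken from the injected `KafkaContext`.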

Consumer Groups

  • Summary: Kafka distributes messages among consumers in the same group, ensuring a message is processed once per group.
  • Default Behavior: Consumers in the same group share the workload of message processing.
  • Configuration: Different consumer groups can be set up to receive messages independently.
  • Example:
    const consumerConfig = {
      groupId: 'myUniqueGroup' // Unique group for independent consumption
    }
    

Historical Messages

  • Summary: New consumers can catch up with all missed messages since their last offset or from the beginning of the log.
  • Default Behavior: Consumers start consuming from their last known offset.
  • Configuration: Set auto.offset.reset to earliest to consume from the beginning if no offset is stored. Note that auto.offset.reset is the Java client’s name for this setting; KafkaJS-based clients such as NestJS instead subscribe with fromBeginning: true.
  • Example:
    const subscribeConfig = {
      fromBeginning: true // NestJS: options.subscribe.fromBeginning
    }
    

Return Value in @EventPattern and @MessagePattern

  • Summary: Return values in message handlers don’t influence the message flow in one-way communication patterns.
  • Default Behavior: Return values are generally ignored unless in a request-reply pattern.
  • Configuration: Implement explicit messaging for request-reply patterns.
  • Example:
    @MessagePattern('requestTopic') handleRequest() {
      // Process and return a response. The return value is only
      // delivered when the client used send() (request-reply);
      // for emit() it is ignored.
      return {data: 'response'}
    }
    
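For completeness, here is a sketch of the requesting side of NestJS’s built-in request-reply mechanism (names like 'kafka' and 'requestTopic' follow the earlier examples; treat this as an outline under those assumptions). Before sending, the client must subscribe to the response topic of each pattern it will use:

```typescript
import {Injectable, Inject, OnModuleInit} from '@nestjs/common'
import {ClientKafka} from '@nestjs/microservices'
import {firstValueFrom} from 'rxjs'

@Injectable()
export class RequestingService implements OnModuleInit {
  constructor(@Inject('kafka') private kafka: ClientKafka) {}

  async onModuleInit() {
    // Tell NestJS to listen on the reply topic for this pattern
    this.kafka.subscribeToResponseOf('requestTopic')
    await this.kafka.connect()
  }

  async request(): Promise<any> {
    // send() returns an Observable that resolves with the
    // handler's return value on the consuming side
    return firstValueFrom(this.kafka.send('requestTopic', {data: 'request'}))
  }
}
```

Only with `send()` does the return value of a `@MessagePattern` handler travel back to the caller; `emit()` is fire-and-forget.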

Bidirectional Communication Pattern

  • Summary: Kafka primarily supports asynchronous communication, but can be configured for request-reply patterns by emitting to a previously agreed response topic.
  • Default Behavior: Asynchronous message broadcasting to multiple consumers.
  • Configuration: Use reply-to topics and correlation IDs for request-reply communication.
  • Example: The response is received by all consumer groups registered to the topic given in message.replyTo.
    // Producer sending a request
    this.kafka.emit('requestTopic', {
      data: 'request',
      replyTo: 'responseTopic'
    })
    
    // Consumer processing and replying
    @EventPattern('requestTopic') processRequest(message) {
      this.kafka.emit(message.replyTo, {
        data: 'response'
      })
    }
    
  • Example: To ensure that the response is received only by the group that sent the request, you may add the group name to the request parameters and include it in the response’s topic. Be aware that this is merely your own convention, not a security feature. Kafka offers Access Control Lists (ACLs) if you need real access restrictions.
    // Producer sending a request with group id
    this.kafka.emit('requestTopic', {
      data: 'request',
      replyTo: 'responseTopic',
      consumerGroup: 'senderGroup'
    })
    
    // Consumer processing and replying
    @EventPattern('requestTopic') processRequest(message) {
      this.kafka.emit(`${message.consumerGroup}-${message.replyTo}`, {
        data: 'response'
      })
    }
    
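The configuration bullet above mentions correlation IDs: since several requests may be in flight on the same reply topic, the requester needs to match each incoming response to its request. A minimal sketch of such a registry (an application-level convention of our own, not a Kafka or NestJS feature):

```typescript
// Matches responses to in-flight requests by a correlationId carried
// in the message payload.
class PendingRequests<T> {
  private pending = new Map<string, (response: T) => void>()
  private counter = 0

  // Register a request; returns its correlationId and a promise for the reply.
  register(): {correlationId: string, response: Promise<T>} {
    const correlationId = `req-${++this.counter}`
    const response = new Promise<T>(resolve => {
      this.pending.set(correlationId, resolve)
    })
    return {correlationId, response}
  }

  // Called from the reply-topic handler; resolves the matching request.
  resolve(correlationId: string, response: T): boolean {
    const resolver = this.pending.get(correlationId)
    if (!resolver) return false
    this.pending.delete(correlationId)
    resolver(response)
    return true
  }
}
```

The producer would then emit `{data, replyTo, correlationId}` and the reply handler would call `resolve(message.correlationId, message)`.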

Data Retention and Scaling

  • Summary: Kafka allows configurable message retention, supporting scalability by adding more consumers.
  • Default Behavior: Messages are retained for a default period, with scalability limited by topic partitions.
  • Configuration: Adjust retention settings and partition counts to scale and maintain messages as needed.
  • Example:
    # Kafka CLI to adjust retention period
    kafka-configs.sh --alter \
                     --entity-type topics --entity-name myTopic \
                     --add-config retention.ms=172800000
    

Good Practice in Microservices

A well-adopted design pattern in microservices architecture involves assigning each microservice its own unique group ID, ideally derived from the service’s name. This approach significantly benefits the scalability and reliability aspects of microservices, especially when deployed in cloud environments where multiple replicas of the same service might be instantiated to handle increased load or ensure high availability.

By default, assigning a unique group ID to each microservice ensures that messages are processed just once by one of the service’s replicas. This behavior aligns with the typical requirements of distributed systems, where duplicate processing of messages is undesirable. Should the processing of a message fail, resulting in an exception, the default Kafka behavior ensures the message is retried until successfully processed by one of the clients. This mechanism usually matches the desired behavior; it follows the requirements of the twelve-factor app and can be implemented effortlessly.

However, it’s crucial to recognize that the message queue may become stuck if an unresolvable error occurs, preventing further message processing. Therefore, it’s important to differentiate between recoverable and unrecoverable errors in your code. Unrecoverable errors often stem from coding mistakes or incorrect configurations. In such scenarios, rigorous testing of the software becomes indispensable.

Identifying and handling unrecoverable errors properly ensures that the system can degrade gracefully or alert the necessary operations personnel to intervene manually. Implementing robust error handling and logging mechanisms can aid in quickly diagnosing and rectifying such issues, minimizing downtime and improving the overall resilience of the microservices architecture.

In summary, careful consideration of group ID assignment, coupled with effective error handling strategies, lays the foundation for a scalable, reliable, and maintainable microservices ecosystem. Rigorous testing plays a crucial role in ensuring that the system behaves as expected under various conditions, thereby safeguarding against potential failures that could lead to message processing stalls.


Separation of Style and Content — Why MUI Sucks

In the rapidly evolving world of web development, the ongoing debate over best practices for designing and structuring applications is more relevant than ever. One focal point of this debate is the practice of integrating styling directly within JavaScript components, an approach popularized by libraries such as Material-UI (MUI). MUI, along with similar frameworks, provides developers with a comprehensive suite of React components that conform to the Material Design guidelines, offering a seemingly quick path to prototyping and interface building. This convenience, however, may come at a significant cost, impacting not just code verbosity but also challenging the core web development principles of maintainability, scalability, and the crucial separation of content and presentation.

By blending the concerns of styling and logic within the same code constructs, such practices raise substantial questions about the long-term implications for web projects. While they promise speed and visual consistency out of the box, they necessitate a closer examination of how these benefits weigh against the potential for increased complexity and the dilution of foundational web standards.

LaTeX: A Standalone Beacon of Separation

LaTeX, a high-quality typesetting system, is a powerful exemplar of the importance of separating content from design. Originating from TeX, a typesetting system developed by Donald Knuth in the late 1970s, LaTeX was later extended by Leslie Lamport to make TeX more accessible and to support a higher level of abstraction. This evolution allows authors to focus solely on the content, freeing them from the intricacies of formatting. As a result, their work is presented consistently and professionally, with LaTeX handling the complex layout tasks invisibly. This separation ensures that the essence of the document remains distinct and untangled from its visual presentation, embodying the principle that good design should facilitate content, not obstruct it.

LaTeX is particularly revered in academic and scientific communities for its precision and efficiency in handling documents that contain complex mathematical expressions, bibliographies, and cross-references. It has become the de facto standard for many scientific publications, thesis documents, and conference papers. Its ability to produce publication-quality texts makes it an indispensable tool for researchers and academics worldwide, further showcasing the timeless value of distinguishing between the substance of one’s work and the manner in which it is visually rendered.

Office Templates: A Parallel in Document Writing

In the corporate world, the principle of separating content from its presentation finds a practical application through the use of templates in office suites such as Microsoft Office, Google Docs, and LibreOffice. These software solutions offer a variety of templates that empower users to concentrate on delivering their core message, while relying on pre-designed styles to ensure that documents adhere to a consistent and professional appearance. This functionality not only streamlines document creation but also elevates the quality of output by abstracting the complexities of design.

Despite the availability of these powerful tools, the effective use of templates remains underutilized in many business environments, leading to inefficiencies and a lack of standardization across documents produced within the same organization. The disparity between the potential for streamlined, professional document creation and the reality of inconsistent application underscores a broader challenge in corporate document management. But that’s a whole different story. Nevertheless, the concept of using templates as a means to separate content from presentation underscores a fundamental principle shared across fields ranging from digital publishing to web development: the true value of content is most fully realized when it is presented clearly and without unnecessary complication by design elements.

The Semantic Web: A Foundation Forgotten

The web has long embraced the principle of separation of concerns — a guideline advising that different aspects of application development, such as content, presentation, and behavior, be managed independently. This principle is not arbitrary; it is the culmination of decades of experience and evolution. From the early days of inline styles and table-based layouts to the adoption of CSS for styling, the web’s history is a testament to the ongoing effort to create more maintainable, accessible, and flexible ways to build the web.

The foundation of the web is built on HTML – a language designed to structure content semantically. This means that tags such as <button>, <header>, <article> or <footer> are not just stylistic choices but convey the meaning and role of the content they encapsulate. This semantic approach is vital for accessibility, search engine optimization, and maintainability.

CSS was introduced to separate the concerns of styling from content structure, allowing HTML to focus on content and semantics, and CSS to manage presentation. This separation is a cornerstone of web development best practices, ensuring that content is accessible and usable across different devices and by users with diverse needs.

The Pitfalls of Mixing Style and Content

Breaking Consistency

One of the strongest arguments against embedding style directly within components, as is common in MUI, is the risk to consistency. Components scattered across a project may be styled differently due to the variability of inline styling or prop-based design adjustments. This piecemeal approach can lead to a fragmented user interface, where similar elements offer differing user experiences.

High Maintenance Costs

While mixing design and content can expedite prototyping, it introduces significant long-term maintenance challenges. Styles tightly coupled with logic are harder to update, especially when design changes require navigating through complex component structures. This can lead to a bloated codebase, where updates are slow and error-prone.

The Designer-Developer Handoff

The collaboration between designers and developers is crucial to the success of any project. By mixing styles with component logic, we blur the lines of responsibility, potentially leading to confusion and inefficiencies. Designers are experts in creating user experiences, while developers excel at implementing functionality. The separation of concerns respects these specializations, ensuring that both can work effectively towards a common goal without stepping on each other’s toes.

The Problem with MUI’s Approach

MUI, while offering a rich set of components for rapid development, often blurs the lines between content structure and presentation. This is evident in the verbosity and explicit styling present within component definitions. Consider the following MUI example:

import React from 'react'
import Grid from '@mui/material/Grid'
import Typography from '@mui/material/Typography'
import Button from '@mui/material/Button'
import {Link} from 'react-router-dom'

function MyComponent() {
  return (
    <Grid container spacing={2}>
      <Grid item xs={12} sm={6}>
        <Typography variant="h1" gutterBottom>
          Welcome to My App
        </Typography>
        <Typography variant="body1">
          Get started by exploring our features.
        </Typography>
        <Button variant="contained" color="primary" component={Link} to="/start">
          Get Started
        </Button>
      </Grid>
    </Grid>
  )
}

In this snippet, the presentation details are deeply intertwined with the component’s structure. It is full of complexity: values such as spacing={2}, xs={12} and sm={6} introduce arbitrary numbers without any context. The only purpose of the Grid and Typography elements is to influence the appearance; they carry no semantics. Such pseudo-components should never be used. The properties spacing, xs, sm, variant, gutterBottom, color, and contained dictate the appearance directly within the JSX. This again violates the principle of separating style and content, leading to a scenario where changing the design necessitates modifications to the component code. For this reason, the React MUI library is the worst front-end library I have ever seen.

Advocating for a More Semantic Approach

Contrast the MUI example with an approach that adheres to the separation of concerns principle. Instead of mixing appearance and content, the full example above can be replaced by a simple standard HTML button within some semantic context, such as a navigation. First, either use an existing library or define your own components. This is a sample of a clean and properly designed component:

import React from 'react'
import {Link} from 'react-router-dom' 

function ButtonLink({to, children}) {
  return <Link className='button' to={to}>{children}</Link>
}

Then you just use your component. Please note that outside of the definition of basic components, you must not use className or any other attribute that defines semantics or styling. Define base components for this purpose; all remaining attributes, such as to, then have a purely functional meaning. The resulting code is very clean and simple, so it is easy to read and maintain:

import React from 'react'
import {ButtonLink} from '@my/components'

function AppHeader() {
  return (
    <header>
      <p>Welcome to My App</p>
      <p>Get started by exploring our features.</p>
      <ButtonLink to='/start'>Get Started</ButtonLink>
    </header>
  )
}

Here you immediately see the content, so you can focus on the relevant parts.

For the look and feel, just apply some styling, which needs to be written only once in a central CSS style file, for example:

header {
  display: flex;
  justify-content: space-between;
}
button, a.button {
  color: white;
  background-color: blue;
  padding: 1ex;
  border: .1ex solid black;
  border-radius: .5ex;
  cursor: pointer;
}

In this simple example, CSS styles the layout inside your <header> tag, which replaces all that <Grid> and <Typography> nonsense. Moreover, the <button> tag and links in button style are both styled identically using CSS, ensuring that all button-like elements across the application maintain a consistent appearance without requiring explicit style definitions in the code. This not only reduces redundancy but also aligns with the semantic nature of HTML, where the tag itself carries meaning.

Furthermore, thanks to the separation of styling and content, a designer can write the CSS and give you basic HTML layout rules, then the developers can focus on the content, instead of having to pay attention to the look and feel.

Please refer to our post Write a Common CSS Style Library for more details on how we suggest to structure your front-end libraries by separating styles from components, templates and content.

The Real Cost of Convenience

While MUI and similar libraries offer rapid development capabilities, they do so at the expense of long-term maintainability, scalability, and adherence to web standards. The explicit declaration of styles and layouts within JSX components leads to a verbose codebase that is harder to maintain and less accessible.

The additional typing and complexity introduced by such frameworks can obscure the semantic nature of the web, making it more challenging to achieve a clean, maintainable, and accessible codebase. This is contrary to all best practices and conflicts with the evolution of web standards, which have consistently moved towards a clear separation of content and presentation.

Embracing Standards for a Sustainable Web

The allure of quick development cycles and visually appealing components cannot be underestimated. However, as stewards of the web, developers must consider the long-term implications of their architectural choices. By embracing HTML’s semantic nature and adhering to the separation of concerns principle, we can build applications that are not only maintainable and scalable but also accessible to all users.

As the web continues to evolve, let’s not forget the lessons learned from its history. Emphasizing semantics, maintaining the separation of content and presentation, and adopting standards-based approaches are crucial for a sustainable, accessible, and efficient web.

Defending Separation of Style and Content

Critics of separating style from content may argue that modern web development practices, like CSS-in-JS, enhance component re-usability, enable dynamic styling, and streamline the development process by colocating styling with component logic. However, adhering to the separation of style and content principle offers significant long-term benefits. It enhances maintainability by allowing changes in design without altering the underlying HTML structure or JavaScript logic. This separation fosters accessibility and scalability, ensuring that websites and applications can grow and adapt over time without becoming entangled in a web of tightly coupled code. Additionally, it aligns with web standards and best practices, promoting a clear organizational structure that benefits developers and designers alike. By maintaining this separation, developers can leverage the strengths of CSS for styling, HTML for structure, and JavaScript for behavior, leading to a more robust, flexible, and accessible web.

For those inclined to integrate styling within React, an advisable approach is packaging styles into a dedicated Style and Component Library. This library should encapsulate the styling based on the Corporate Identity, allowing the actual code to utilize components devoid of additional styling. This methodology garners benefits from both paradigms. However, it’s crucial to note that this often falls short in meeting accessibility standards and restricts the styling’s applicability outside the chosen framework (e.g., React or Angular). In contrast, segregating styling from HTML via CSS and subsequently crafting components ensures technological independence, enabling the same styling to be utilized in diverse contexts like a PHP-based WordPress theme, showcasing its versatility across various platforms.


Git Submodule from Existing Path with Branches

This article will show you how you can migrate an existing path within an existing project into a new project, then add that new project as submodule to the original project, while keeping all tags and branches. Typically, branches are lost in migration, but not with this little addition.

Fill a New Repository With a Complete Sub Path of Another Repository

You clone the original project, then filter it down to the path you want to extract, change the origin, and push everything to the new origin. On its own, however, this will not copy all the existing branches: only branches that have been checked out locally get pushed to the new location, and that is what the for-loop does for you.

git clone git@server.url:path/to/original-project.git
cd original-project
git filter-branch --tag-name-filter cat --subdirectory-filter path/to/submodule -- --all
for b in $(git branch -r | grep -v -- '->'); do # skip the origin/HEAD alias
    git checkout ${b#origin/}
    git reset --hard $b
    git fetch
    git reset --hard ${b#origin/}
done
git remote remove origin
git remote add origin git@server.url:path/to/new-sub-project.git
git push origin --all
git push origin --tags

Now you have a new repository that contains only a part of the previous one.

Replace a Path with a Submodule

The next step is to replace the path in the old repository with the new repository as a submodule. Clone the original project again (delete the previous clone).

git clone git@server.url:path/to/original-project.git
cd original-project
git rm -rf path/to/submodule
git commit -am "remove old path/to/submodule"
git submodule add git@server.url:path/to/new-sub-project.git path/to/submodule
git commit -am "add submodule path/to/submodule"

Now you have replaced path/to/submodule by a new submodule.


Even Longer Way To a Stable Cloud

It is important to control your own data on your own servers. A «cloud» is nothing but running your data on someone else’s computer, and thus they control your data, not you. Encryption may help a little bit, but the best solution is to fully own your data and infrastructure. This is also the philosophy of Pacta Plc: We do not store data on hardware that we do not own. We may use external clouds or servers, but not where customer data is involved. We protect your data.

Running a full stable infrastructure is not a simple task, and there is much to learn. So here you find the story of my adventures when learning and setting up a full cloud infrastructure, as it is also currently used by Pacta Plc.

History

1995 – 2010 Outdated Workstations as Server

I have been running my own server for my websites, e-mail, chat and much more since the nineties. It was always a single computer with many services in a shared bare-metal environment. In the beginning, I ran the server on my workstation, then on an old Siemens workstation that I inherited from my employer.

2011 Root Server and HP ProLiant Micro Server

Later I rented a root server, and in 2011 I bought my first dedicated HP ProLiant MicroServer. By now, I have three of them. On those machines, processes were started directly, with no virtualization.

2015 Docker

In 2015, I started my first experiments with Docker and started to migrate my services from the standalone bare metal servers to Docker containers. For this, I built dozens of images.

Constantly Extending Hard Disks

Over time the HP ProLiant server gained brothers and sisters, so that I currently have three of them, plus two huge servers, storing more and more data. Whenever a hard disk runs out of space, Linux lets you easily extend it or replace a drive, with only a single reboot of downtime.

2017 – 2018 From Kubernetes to Docker Swarm

Then in 2017, Kubernetes came into my focus. But it is totally overcomplicated. Docker Swarm is a much simpler and much more stable solution. There is no need for Kubernetes or OpenShift, unless you want to lose your time. So in 2018 I set up a Docker Swarm on some cheap PC Engines mini workers.

2017 GlusterFS

But a swarm solution needs a distributed cluster filesystem, so I came across GlusterFS, which turned out to be a complete disaster. In the beginning it was very promising, but later, when filled with terabytes and terabytes of data, it became slow and very unstable.

2018 LizardFS

So I started researching, which pointed me to LizardFS. The result is much more stable than GlusterFS, but still slow. Unlike with GlusterFS, the LizardFS development team was really helpful and assisted me in getting it up relatively fast and stable. But especially the masters tend to require huge amounts of memory. That’s why I bought a large HP and a large Dell server as master and backup master. The whole LizardFS now holds 90TB of data.

2020 CephFS

Since about 2020, I have been experimenting with CephFS, which is my currently proposed cluster file system. You can run it on PC Engines APU hosts with 4GB RAM. For the OSDs, put in an mSATA SSD of 1TB or 2TB. Choose at least three OSDs and three manager nodes. You cannot run OSDs and managing nodes on the same device, because 4GB RAM is not enough, but you can run the MON, MGR and MDS servers on the same node.

2021 CephFS OSD Desaster

Wacky Initial Setup

My CephFS initially ran on three PC Engines with 4GB RAM, each with a 1TB mSATA SSD and meant to run an OSD that provides its disk to the cluster. In addition, one was set up as MGR, MON and MDS (management, monitoring and metadata server), but for these additional duties, 4GB RAM is not enough. So I initially fixed the problem by restarting the OSD on the management host every hour. This way, it was more or less stable. Later in 2020, I stopped the OSD on that host, lost one of three redundancies, but got a stable system. I then bought four more devices, three together with a 2TB mSATA SSD each from AliExpress, and one as a separate monitoring server.

First OSD Fails

Unfortunately, before I had time to add the new nodes to the network, there was a power failure that the UPS could not absorb, and after rebooting, the BlueStore in one of the remaining two OSDs was corrupt. With only one OSD left, the whole filesystem degraded and went offline. So I added the third OSD again; the recovery started in this constellation on Monday and finished on Thursday. But all data was back, the filesystem up and running again.

Second OSD Fails

But when I then tried to add the new hosts, I learned that they were incompatible: the old nodes ran Ubuntu 18.04, which comes with Ceph 12 Luminous, the new ones already Ubuntu 20.04 with Ceph 15 Octopus. So they could not talk to each other. In a first step, I upgraded the old Ceph installations to 14 Nautilus, but they were still incompatible. Unfortunately, in the upgrade process, one of the remaining two OSDs corrupted: one succeeded, one failed. So I was in the same position as one week earlier, with only one OSD left. I downgraded the new hosts to Ubuntu 18.04, upgraded them to Ceph 14 and added one of the new OSDs to get back to a redundancy factor of two. The recovery started once again on Monday and finished on Saturday. During the week, I added the remaining hosts to get full redundancy.

Problem Solved?

Currently, one OSD node is still broken, so five of six OSDs are running. In addition, I bought two more hosts to also run the management servers redundantly. Now the system is up and running and stable, with enough redundancy.

The lessons from this: never run fewer than three OSDs, and run the monitors on a separate device, or even better on three devices. Replace a failed OSD immediately, before the whole system degrades.

2021 CephFS MDS: Five Months to Recover

Update on 2021/10/22: since Wednesday, the services are finally fully back, after a downtime of five months due to a recovery of the CephFS Meta Data Server (MDS). Our mail server had been hosted at an external provider for years. This was still a classical setup, no dockerisation, no cloud, just an installation of postfix and dovecot on a dedicated server with encrypted hard disks. That worked well, but then, after a power outage, the hard disk could no longer be decrypted. So I decided to restore the backup into the CephFS.

Unfortunately, the 4GB RAM per machine was just enough for the normal workload. When I unpacked a backup of about 50GB of mails, the filesystem completely screwed up. The MDS went into recovering state, which used too much memory on the OSDs and on the MDS, so that they crashed and restarted after a couple of minutes. I tried to recover that way, but it didn’t finish after a week, not after two, not after three.

Since I could not add more RAM to the PC-Engines, I bought four HPE MicroServer Gen10 Plus servers with a simple fast USB stick for the Ubuntu operating system: three as OSDs with 8GB RAM and a Western Digital 10TB harddisk each, and one as MON, MGR and MDS with 64GB RAM. They were delivered at the beginning of September. The OSDs are fine and the recovery of the MDS was successful, but the rejoin state again used too much memory: every four minutes the server crashed. Unfortunately, the HPE MicroServer has only two slots for max. 16GB RAM each. Finally, I abused my LizardFS master server, which has 160GB RAM, as an additional MDS server. With that configuration, recovery completed overnight!

Learnings

  • CephFS is stable enough to recover from this kind of disaster.
  • Add enough RAM! The cheap PC-Engines with 4GB RAM are not enough. 8GB RAM seems to be enough for an OSD with a 10TB harddisk. 64GB RAM is normally enough for MON, MGR and MDS in normal operation, but not necessarily during disaster recovery.
  • Since only one MDS server is active and the others are standby, adding just one MDS with a lot of memory during disaster recovery solves the problem.
  • For the MDS and all other services, set a memory limit slightly below your system memory, e.g. MemoryLimit=55G in section [Service] of /lib/systemd/system/ceph-mds@.service.
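A cleaner way to set the limit mentioned above is a systemd drop-in, so the packaged unit file is not modified and the setting survives upgrades; a minimal sketch (the 55G value is from the text above, the drop-in path follows standard systemd conventions):

```ini
# /etc/systemd/system/ceph-mds@.service.d/memory.conf
# Drop-in override: let systemd restart the MDS before it can
# drive the whole host into the kernel OOM killer.
[Service]
MemoryLimit=55G
```

Apply it with `systemctl daemon-reload` followed by a restart of the MDS unit. Note that `MemoryLimit=` is the cgroup v1 name; on cgroup v2 systems the equivalent setting is `MemoryMax=`.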

Later, I’ll add one or two more manager nodes and possibly three more OSDs to expand the storage.

Current Status and Recommendation

So my current stable cloud runs ten dedicated Docker Swarm nodes, one manager and nine workers, and is backed by a five-node CephFS with two managers and three OSDs. For setting up all these nodes, I use Ansible. All in all, there are 19 PC-Engines nodes, 3 HP ProLiant, 4 HPE MicroServer Gen10 Plus, a large HP and a Dell. Only my huge amount of multimedia data, terabytes of scanned documents, family photos and home videos, is still stored in LizardFS; all other data is now stored in CephFS.
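The Ansible setup itself is not shown here; a hypothetical inventory grouping the node roles described above could look like this (all host names are made up for illustration):

```ini
# Hypothetical Ansible inventory for the cluster layout described above.
[swarm_manager]
swarm-mgr-1

[swarm_worker]
swarm-wkr-[1:9]

[ceph_manager]
ceph-mgr-[1:2]

[ceph_osd]
ceph-osd-[1:3]
```

With such groups, one playbook per role can configure all nodes of that role in a single run.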

Development

Write a Common CSS Style Library

As a company that creates various products and services, mostly web applications and web-based services, it is very important for Pacta Plc to have a clear company identity with a common look and feel in all products and public appearances. Therefore our first step was to get official corporate brand identity guidelines, a so-called corporate identity (CI). After we had received our guidelines, the same designer was hired to create a landing page. The result is what you can now see as our Pacta.Swiss corporate page. What the designer delivered was only a sketch of the final result, which had to be implemented in HTML and CSS. Pacta styling follows best practices:

  • Pure CSS3 + HTML5.
  • No JavaScript for the basic layout. JavaScript is used in the basic design only to obfuscate the email address.
  • Clear separation of styling and structure.
  • All dimensions are relative to context (%), font (rem, em, ex) or browser size (vh, vw).
  • No absolute dimensions (no px allowed).
  • Initial font size is not defined, so the user’s configuration is respected.
  • No inline styling in HTML elements (no style=).
  • Styles are attached to HTML by element name or class.

Basic technical decisions:

  • Build environment is npm.
  • CSS is generated using Stylus processor.
  • Supported development environment is Linux only.

Initial Project Setup

We created a project pacta/style that contains a file package.json and folders for fonts, images, scripts and stylus.

package.json

The file package.json just holds the basics for generating CSS with Stylus:

  • A script npm install to install Stylus.
  • A script npm run-script build to build CSS.
  • A script npm start to rebuild whenever a file changes. inotifywait is a Linux tool to monitor file system changes.
{
  "name": "pacta-style",
  "version": "1.0.0",
  "dependencies": {
    "stylus": "^0.54.7"
  },
  "scripts": {
    "build-css": "stylus -c stylus/ --out css",
    "run-css": "while inotifywait -qq -e modify -r style/stylus; do npm run build-css; done",
    "build": "npm run build-css",
    "start": "npm run build-css && npm run run-css"
  }
}

All Stylus sources are in directory stylus and CSS targets are generated to directory css. This pacta/style project is included as git submodule in all other projects.
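The submodule inclusion can be tried end-to-end with plain git; below is a minimal local sketch (the repository names and paths are made up, and the `protocol.file.allow` setting is only needed on newer git versions, which block local-path submodules by default):

```shell
# Sketch: include a style repository as a git submodule, entirely locally.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# A stand-in for the pacta/style repository.
git init -q style-lib
(cd style-lib \
  && git -c user.email=demo@example.com -c user.name=demo \
       commit -q --allow-empty -m init)

# The consuming project includes it under ./style.
git init -q app
cd app
git -c protocol.file.allow=always submodule add -q ../style-lib style
test -f .gitmodules && echo "submodule ok"
```

Cloning a project that uses submodules then needs `git clone --recurse-submodules` (or `git submodule update --init` after a plain clone).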

root.styl

To get a consistent look and to keep basic settings easy to change, there are CSS variable definitions, e.g. for the color palette, the basic spacing, the border style or the shadow definition.

:root
    --blue100 #002b5e
    --blue90 #013466
    --blue80 #034072
    --blue70 #034b7c
    --blue60 #045a89
    …
    --border 0.1em solid var(--blue100)
    --shadow 0 0.25rem 0.25rem 0 rgba(0, 0, 0, 0.14),
        0 0.5rem 0.1rem -0.25rem rgba(0, 0, 0, 0.12),
        0 0.1rem 0.75rem 0 rgba(0, 0, 0, 0.2)
    …

grid.styl

The styles define all objects in all resolutions based on element name or class, such as grids or cards:

.grid2, .grid3, .grid4
  --grid-size 1fr
  display grid
  grid-template-columns: repeat(var(--grid-columns), var(--grid-size))
  grid-auto-rows: min-content 
.grid2
  --grid-columns 2
.grid3
  --grid-columns 3
.grid4
  --grid-columns 4
@media all and (max-width: 120rem)
    .grid4
        --grid-columns 2
@media all and (max-width: 80rem)
    .grid2
        --grid-columns 1
@media all and (max-width: 60rem)
    .grid3, .grid4
        --grid-columns 1
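For readers unfamiliar with Stylus: the indentation-based rules above compile to ordinary CSS, with the custom properties passed through unchanged. Roughly (output shortened to the base rule and one media query):

```css
.grid2, .grid3, .grid4 {
  --grid-size: 1fr;
  display: grid;
  grid-template-columns: repeat(var(--grid-columns), var(--grid-size));
  grid-auto-rows: min-content;
}
.grid4 {
  --grid-columns: 4;
}
@media all and (max-width: 120rem) {
  .grid4 {
    --grid-columns: 2; /* narrower browser: the 4-column grid folds to 2 */
  }
}
```

Because only the `--grid-columns` variable changes per breakpoint, the grid template itself is defined exactly once.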

card.styl

.card
    display: grid
    grid-template-columns: auto 1fr
    border: var(--border)
    box-shadow: var(--shadow)
    width: calc(100% - 2em)
    .icon
        background-color: var(--blue100)
        width: 3rem
        height: 3rem
        border-radius: 50%
        svg, object, img
            width: calc( 100% - .2em )
            height: calc( 100% - .2em )
    > .heading
        background-color: var(--heading-bg)
        &, h1, h2, h3, h4, h5, h6
            color: var(--heading-color)
    .content
        display: flex
        flex-flow: column nowrap
        margin: .5em
        width: calc(100% - 1em)

The Landing Page

Our company landing page is Pacta.Swiss, where the company and its products are introduced. It is implemented as a static HTML page using the generated CSS. In fact, there are two pages, one in English and one in German; the matching language is selected by an Nginx server through HTTP content negotiation with the browser. The page’s implementation looks like this:

  <body>
    <header>
      <div class="logo">
        <img src="style/images/logo.svg" alt="" /><span>Pacta AG</span>
      </div>
      <div>
        <nav class="social">
          <a>…</a>
          <a>…</a>
        </nav>
      </div>
    </header>
    <main>
      …
      <div class="container">
        <h2>…<span class="subtitle">…</span></h2>
        <div class="grid6">
          <div class="card">
            <div class="icon"><svg>…</svg></div>
            <div class="content">
              <h3>…<span class="subtitle">…</span></h3>
              <p>…</p>
              <div class="bottom"><a>…</a></div>
            </div>
          </div>
          <div class="card" disabled>
            <div class="icon"><svg>…</svg></div>
            <div class="content">
              <h3>…<span class="subtitle">…</span></h3>
              <p>…</p>
              <div class="bottom"><a>…</a></div>
            </div>
          </div>
          …
          </div>
        </div>
      </div>
      <div class="to-inverse" />
      <div class="inverse">
        <div class="container">
          <h2>…<span class="subtitle">…</span></h2>
          …
        </div> 
        …
      </div>
    </main>
    <footer>…</footer>
  </body>
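The Nginx language selection mentioned above is not part of the article; one common way to approximate content negotiation is a `map` on the Accept-Language header. A simplified sketch (file names, paths and server name are assumptions, and a full implementation would also honour the header’s q-values):

```nginx
# Sketch: serve index.en.html or index.de.html depending on Accept-Language.
map $http_accept_language $lang {
    default  en;
    ~*^de    de;   # header starting with "de" selects the German page
}

server {
    listen 80;
    server_name pacta.swiss;
    root /var/www/pacta;

    location = / {
        try_files /index.$lang.html =404;
    }
}
```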

React Components

Our software consists of Progressive Web Applications written in ReactJS, so we need a React component library. For this, we simply created another git project pacta/components that contains a large number of JavaScript React component files and is included as a git submodule in all development projects. Based on the work above, it is very easy to implement React components: just define the parameters and return the necessary HTML structure.

Grid.js

This is the full definition of our grid layout, where you can specify size as the maximum number of grid columns. The number of grid columns actually shown depends on the browser width, as defined in the CSS snippet above:

import React from 'react';
import PropTypes from 'prop-types';

export default class Grid extends React.PureComponent {
  static propTypes = {
    children: PropTypes.oneOfType([
      PropTypes.array,
      PropTypes.object,
      PropTypes.string
    ]),
    size: PropTypes.oneOfType([PropTypes.string, PropTypes.number]),
    type: PropTypes.string,
  };
  render = () => (
    <div
      className={
        'grid' +
        this.props.size +
        (this.props.type ? ' ' + this.props.type : '')
      }
    >
      {this.props.children}
    </div>
  );
}
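The only logic in this component is the className concatenation in render. Extracted as a plain function (a hypothetical helper for illustration, not part of the library), it behaves like this:

```javascript
// Hypothetical standalone version of Grid's className logic:
// "grid" + size, plus an optional extra type class.
function gridClassName(size, type) {
  return 'grid' + size + (type ? ' ' + type : '');
}

console.log(gridClassName(4));            // grid4
console.log(gridClassName(3, 'inverse')); // grid3 inverse
```

So `<Grid size="4" />` renders a `div.grid4`, whose actual column count the CSS breakpoints then decide.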

Card.js

A card may have an icon and a heading:

import React from 'react';
import PropTypes from 'prop-types';
import MdiIcon from '@mdi/react';

export default class Card extends React.PureComponent {
  static propTypes = {
    children: PropTypes.oneOfType([
      PropTypes.array,
      PropTypes.object,
      PropTypes.string
    ]),
    type: PropTypes.string,
    icon: PropTypes.oneOfType([PropTypes.string, PropTypes.object]),
    heading: PropTypes.oneOfType([PropTypes.string, PropTypes.object])
  };
  heading = () =>
    typeof this.props.heading === 'string' ? (
      <h2>{this.props.heading}</h2>
    ) : (
      this.props.heading
    );
  render = () => (
    <div
      className={
        'card ' + (this.props.type || '') + (this.props.icon ? '' : ' noicon')
      }
    >
      {this.props.icon ? (
        <div
          className={'icon' + (this.props.type ? ' ' + this.props.type : '')}
        >
          {typeof this.props.icon === 'string' ? (
            <MdiIcon path={this.props.icon} />
          ) : (
            this.props.icon
          )}
        </div>
      ) : this.props.heading ? (
        <div className="heading">{this.heading()}</div>
      ) : (
        <></>
      )}
      {(this.props.children || (this.props.heading && this.props.icon)) && (
        <div className="content">
          {this.props.heading && this.props.icon ? this.heading() : <></>}
          {this.props.children}
        </div>
      )}
    </div>
  );
}

Usage Example

As a usage example for the above samples, here is a snippet from the landing page on Pacta.Cash:

class LandingPage extends React.Component {
  …
  render = () => (
    <>
      <StepsToCoin current={this.props.current} />
      <Container>
        <h2>
          {this.props.t("landingpage.titlewhy")}
          <span className='subtitle'>
            {this.props.t("landingpage.subtitlewhy")}
          </span>
        </h2>
        <Grid size='4'>
          <Card icon={this.FAQ}>
            {this.props.t("landingpage.wheretouse")}
            <p className='bottom'>
              <button disabled>{this.props.t("landingpage.more")}</button>
            </p>
          </Card>
          <Card icon={this.FAQ}>
            {this.props.t("landingpage.investment")}
            <p>
              <img src={ChartImage} alt='Ethereum chart of one year' />
            </p>
            <p className='bottom'>
              <button disabled>{this.props.t("landingpage.more")}</button>
            </p>
          </Card>
          <Card icon={this.FAQ}>
            {this.props.t("landingpage.privacy")}
            <p className='bottom'>
              <button disabled>{this.props.t("landingpage.more")}</button>
            </p>
          </Card>
          <Card icon={this.FAQ}>
            {this.props.t("landingpage.independence")}
            <p className='bottom'>
              <button disabled>{this.props.t("landingpage.more")}</button>
            </p>
          </Card>
        </Grid>
      </Container> 
      …
  }
}

WordPress Template

Last but not least, our blog runs on WordPress, so Pacta also needs a WordPress template in the same style. Here we do the same as with the React component library, only that the template is written in PHP instead of JavaScript.

wordpress.styl

There is only a very small additional style file for WordPress-specific definitions. All other definitions are commonly shared:

html body #wpadminbar
    height: 46px
    width: 100%

.wp-type
    margin: 0 1em
    display: flex
    flex-flow: row nowrap
    justify-content: space-between

index.php

A WordPress template requires at least an index.php file, so let’s show this as an example:

<?php get_header() ?>

<?php if (have_posts()) : while (have_posts()) : the_post(); ?>

<?php if (has_post_thumbnail()) : ?>
<div class="cropped-image">
    <?php the_post_thumbnail('full') ?>
</div>
<?php endif ?>

<div class="wp-type">
    <div>
        <?php the_category(' ', ' → ') ?>
    </div>
    <div>
        <?php the_tags('', ' ', '') ?>
    </div>
</div>
<div class="container">
    <article>
        <h1><?php the_title() ?></h1>
        <?php the_content() ?>
    </article>
</div>

<?php endwhile; ?>
<?php endif; ?>
<?php get_footer() ?>