Development

NestJS and Kafka

Apache Kafka, a distributed message broker, integrates easily into NestJS servers, as described in the official NestJS documentation. This article gives you a short overview of the default communication behavior, some of the available features and configurations, and some good practices.

Overview of Basic Kafka Features in NestJS

NestJS Integration

  • Summary: NestJS facilitates the integration of Kafka, offering a streamlined approach for both message production and consumption within applications.
  • Default Behavior: Through the use of decorators and modules, NestJS abstracts much of the complexity involved in setting up Kafka clients, enabling straightforward message handling capabilities.
  • Configuration: Kafka can be configured at various points within a NestJS application, including during bootstrap for global settings, within the AppModule for application-wide settings, and within feature modules for localized settings. This flexibility allows for detailed control over consumer groups, error handling, and message retry strategies.
  • Bootstrap Initialization: Initializing Kafka in the bootstrap function is essential for setting up the Kafka microservice within the application context. It connects the app to Kafka as a microservice and is the place for global Kafka configuration, such as consumer group settings and error handling strategies.
    const app = await NestFactory.create(AppModule, …)
    app.connectMicroservice(kafkaOptions)
    await app.startAllMicroservices()
    await app.listen(4000, '0.0.0.0')
  • AppModule Initialization: You often see an application-wide Kafka configuration in the main AppModule, potentially overriding or complementing the bootstrap settings. This is completely redundant with the bootstrap initialization, which is mandatory anyway, so don’t do it.
    @Module({
      imports: [KafkaModule.register(kafkaOptions)]
    })
    export class AppModule {}
  • Feature Module Kafka Client Injection: Necessary when using Kafka service injection to produce messages (this.kafka.emit(topic, data)) or when explicit control over the Kafka client is needed in a specific module. When you’re only consuming messages using @EventPattern or @MessagePattern, without explicitly producing messages within a service, the direct injection of Kafka (ClientKafka) is not necessary. Note that these decorators only work in classes annotated with @Controller(), not in arbitrary providers.
    @Module({
      imports: [ClientsModule.register([{ name: 'kafka', kafkaOptions }])],
      providers: [XyzService],
      controllers: [XyzController],
      exports: [XyzService]
    })
    export class XyzModule {}
  • Service Injection: Kafka injection in a service requires the above-mentioned feature module Kafka client injection. Then, in the constructor of the class XyzService, you can use the following pattern to get access to the Kafka client functions, namely this.kafka.emit. The name in @Inject('kafka') is arbitrary but must match name: 'kafka' in ClientsModule.register.

    constructor(@Inject('kafka') private kafka: ClientKafka) {}
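The snippets above pass a kafkaOptions object around without defining it. A minimal sketch of such a configuration, assuming a single broker on localhost:9092 and a made-up service name, might look like this:

```typescript
import {KafkaOptions, Transport} from '@nestjs/microservices'

// Minimal Kafka configuration sketch; broker address, client id and
// group id are assumptions and must be adapted to your environment.
export const kafkaOptions: KafkaOptions = {
  transport: Transport.KAFKA,
  options: {
    client: {
      clientId: 'xyz-service',
      brokers: ['localhost:9092']
    },
    consumer: {
      groupId: 'xyz-service' // one group per microservice, see good practices below
    }
  }
}
```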

One-to-Many Broadcast

  • Summary: Kafka’s model allows for broadcasting messages to multiple consumers. All consumers subscribed to a topic will receive messages sent to that topic.
  • Default Behavior: By default, all messages sent to a topic are broadcasted to all consumers subscribed to that topic.
  • Configuration: Configuration is managed at the consumer level by subscribing to topics.
  • Example:
    @Controller()
    export class MyController {
      @EventPattern('myTopic')
      async handleBroadcastMessage(@Payload() message: any) {
        // Process message
      }
    }
    

Error Handling and Retries

  • Summary: In NestJS, unhandled exceptions during Kafka message processing lead to retries, affecting the message’s processing within its topic or potentially the entire client group if only a single processing thread is available.
  • Default Behavior: Throwing an exception in an event handler indicates to Kafka to retry the message. This may block further processing of the topic or the entire client group if it operates with a single thread.
  • Configuration: To manage retries and error handling more granularly, disable auto-commit and control offset commits manually, or use specific exceptions like `KafkaRetriableException` for controlled retry behavior.
  • Example:
    @EventPattern('requestTopic') handleRequest(data) {
      throw new Error() // This leads to a retry
    }
    
  • Good Practice Retry Pattern: Implementing a manual retry mechanism by re-emitting the failed message back to its topic can serve as a pragmatic approach to ensure that processing attempts continue without indefinitely blocking the queue. This pattern, however, is best suited for scenarios where message order is not paramount.
    @EventPattern('requestTopic') handleRequest(data) {
      try {
        // Perform the required processing
      } catch (e) {
        this.kafka.emit('requestTopic', data) // re-add to the back of the queue
      }
    }
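Re-emitting unconditionally can loop a poison message forever. A bounded variant carries an attempt counter in the payload and diverts the message to a dead-letter topic after a fixed number of tries. The helper below is a framework-free sketch; the .dlq topic suffix and the limit of three attempts are assumptions:

```typescript
interface RetryableMessage {
  data: unknown
  attempts?: number // how many times processing has failed so far
}

// Decide where a failed message goes next: back to its own topic
// with an incremented counter, or to a dead-letter topic.
function nextDestination(
  topic: string,
  message: RetryableMessage,
  maxAttempts = 3
): {topic: string; message: RetryableMessage} {
  const attempts = (message.attempts ?? 0) + 1
  const target = attempts >= maxAttempts ? `${topic}.dlq` : topic
  return {topic: target, message: {...message, attempts}}
}
```

In the catch block above, you would then call this.kafka.emit with the topic and message returned by this helper, instead of re-emitting to the request topic unconditionally.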
    

Auto-Commit vs. Manual-Commit

  • Summary: Kafka supports both auto-committing offsets and manual offset management.
  • Default Behavior: Auto-commit is enabled by default, committing offsets at a configured interval.
  • Configuration: To switch to manual commit, disable auto-commit and manually manage offset commits.
  • Example:
    // Disable auto-commit; NestJS passes the run
    // configuration on to the kafkajs consumer
    const kafkaOptions = {
      transport: Transport.KAFKA,
      options: {
        run: {autoCommit: false}
      }
    }
    

Consumer Groups

  • Summary: Kafka distributes messages among consumers in the same group, ensuring a message is processed once per group.
  • Default Behavior: Consumers in the same group share the workload of message processing.
  • Configuration: Different consumer groups can be set up to receive messages independently.
  • Example:
    const consumerConfig = {
      groupId: 'myUniqueGroup' // Unique group for independent consumption
    }
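The delivery semantics can be illustrated without a broker: every group receives each message once, and within a group the messages are distributed over the members. The following framework-free simulation (all names made up) mirrors that behavior with a simple round-robin assignment:

```typescript
// Deliver each message once per consumer group,
// round-robin between the members inside a group.
function deliver(
  messages: string[],
  groups: Record<string, string[]> // groupId -> consumer ids
): Record<string, string[]> {      // consumer id -> received messages
  const received: Record<string, string[]> = {}
  for (const members of Object.values(groups)) {
    members.forEach(member => (received[member] = []))
    messages.forEach((message, i) => {
      received[members[i % members.length]].push(message)
    })
  }
  return received
}
```

Running deliver(['m1', 'm2'], {a: ['a1', 'a2'], b: ['b1']}) hands a1 only m1 and a2 only m2, while the single member of group b receives both messages.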
    

Historical Messages

  • Summary: New consumers can catch up with all missed messages since their last offset or from the beginning of the log.
  • Default Behavior: Consumers start consuming from their last known offset.
  • Configuration: The kafkajs client used by NestJS has no auto.offset.reset option; instead, subscribe with fromBeginning: true to consume from the beginning when no offset is stored.
  • Example:
    const kafkaOptions = {
      transport: Transport.KAFKA,
      options: {
        subscribe: {fromBeginning: true}
      }
    }
    

Return Value in @EventPattern and @MessagePattern

  • Summary: Return values in message handlers don’t influence the message flow in one-way communication patterns.
  • Default Behavior: Return values are ignored for events sent with emit(); only in the request-reply pattern (client.send()) is the returned value delivered back to the caller.
  • Configuration: Use client.send() together with @MessagePattern, or explicit reply topics, for request-reply patterns.
  • Example:
    @MessagePattern('requestTopic') handleRequest() {
      // Process and return the response
      return {data: 'response'} // only delivered if the request was sent via client.send()
    }
    

Bidirectional Communication Pattern

  • Summary: Kafka primarily supports asynchronous communication, but can be configured for request-reply patterns by emitting to a previously agreed response topic.
  • Default Behavior: Asynchronous message broadcasting to multiple consumers.
  • Configuration: Use reply-to topics and correlation IDs for request-reply communication.
  • Example: The response is received by all consumer groups registered to the topic given in message.replyTo.
    // Producer sending a request
    this.kafka.emit('requestTopic', {
      data: 'request',
      replyTo: 'responseTopic'
    })
    
    // Consumer processing and replying
    @EventPattern('requestTopic') processRequest(message) {
      this.kafka.emit(message.replyTo, {
        data: 'response'
      })
    }
    
  • Example: To limit the response to be sent only to the same group as the request has been sent, you may add the group name in the request parameters and add it to the response’s topic. Be aware, that this is your convention not a security feature. Kafka offers Access Control Lists (ACL) if you need real access restrictions.
    // Producer sending a request with group id
    this.kafka.emit('requestTopic', {
      data: 'request',
      replyTo: 'responseTopic',
      consumerGroup: 'senderGroup'
    })
    
    // Consumer processing and replying
    @EventPattern('requestTopic') processRequest(message) {
      this.kafka.emit(`${message.consumerGroup}-${message.replyTo}`, {
        data: 'response'
      })
    }
    

Data Retention and Scaling

  • Summary: Kafka allows configurable message retention, supporting scalability by adding more consumers.
  • Default Behavior: Messages are retained for a default period, with scalability limited by topic partitions.
  • Configuration: Adjust retention settings and partition counts to scale and maintain messages as needed.
  • Example:
    # Kafka CLI to adjust retention period
    kafka-configs.sh --alter \
                     --entity-type topics --entity-name myTopic \
                     --add-config retention.ms=172800000
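The value 172800000 in the command above is simply two days expressed in milliseconds. Deriving such values in a deployment script keeps the intent readable:

```typescript
// retention.ms for the kafka-configs.sh call above: two days in milliseconds
const DAY_MS = 24 * 60 * 60 * 1000
const retentionMs = 2 * DAY_MS

console.log(retentionMs) // 172800000
```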
    

Good Practice in Microservices

A well-adopted design pattern in microservices architecture involves assigning each microservice its own unique group ID, ideally derived from the service’s name. This approach significantly benefits the scalability and reliability aspects of microservices, especially when deployed in cloud environments where multiple replicas of the same service might be instantiated to handle increased load or ensure high availability.

By default, assigning a unique group ID to each microservice ensures that messages are processed just once by one of the service’s replicas. This behavior aligns with the typical requirements of distributed systems, where duplicate processing of messages is undesirable. Should the processing of a message fail, resulting in an exception, the default Kafka behavior ensures the message is retried until successfully processed by one of the clients. This mechanism usually matches the desired behavior; it follows the requirements of the twelve-factor app and can be implemented effortlessly.
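Following this pattern, the group ID can be derived mechanically from the service name. The helper below is a trivial sketch; the name billing-service and the -group suffix are made-up conventions:

```typescript
// Derive the Kafka consumer group id from the service name, so every
// microservice gets its own group and all of its replicas share it.
function groupIdFor(serviceName: string): string {
  return `${serviceName}-group`
}

const consumerConfig = {
  groupId: groupIdFor('billing-service')
}
```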

However, it’s crucial to recognize that the message queue may become stuck if an unresolvable error occurs, preventing further message processing. Therefore, it’s important to differentiate between recoverable and unrecoverable errors in your code. Unrecoverable errors often stem from coding mistakes or incorrect configurations. In such scenarios, rigorous testing of the software becomes indispensable.

Identifying and handling unrecoverable errors properly ensures that the system can degrade gracefully or alert the necessary operations personnel to intervene manually. Implementing robust error handling and logging mechanisms can aid in quickly diagnosing and rectifying such issues, minimizing downtime and improving the overall resilience of the microservices architecture.

In summary, careful consideration of group ID assignment, coupled with effective error handling strategies, lays the foundation for a scalable, reliable, and maintainable microservices ecosystem. Rigorous testing plays a crucial role in ensuring that the system behaves as expected under various conditions, thereby safeguarding against potential failures that could lead to message processing stalls.

Development

Separation of Style and Content — Why MUI Sucks

In the rapidly evolving world of web development, the ongoing debate over best practices for designing and structuring applications is more relevant than ever. One focal point of this debate is the practice of integrating styling directly within JavaScript components, an approach popularized by libraries such as Material-UI (MUI). MUI, along with similar frameworks, provides developers with a comprehensive suite of React components that conform to the Material Design guidelines, offering a seemingly quick path to prototyping and interface building. This convenience, however, may come at a significant cost, impacting not just code verbosity but also challenging the core web development principles of maintainability, scalability, and the crucial separation of content and presentation.

By blending the concerns of styling and logic within the same code constructs, such practices raise substantial questions about the long-term implications for web projects. While they promise speed and visual consistency out of the box, they necessitate a closer examination of how these benefits weigh against the potential for increased complexity and the dilution of foundational web standards.

LaTeX: A Standalone Beacon of Separation

LaTeX, a high-quality typesetting system, is a powerful exemplar of the importance of separating content from design. Originating from TeX, a typesetting system developed by Donald Knuth in the late 1970s, LaTeX was later extended by Leslie Lamport to make TeX more accessible and to support a higher level of abstraction. This evolution allows authors to focus solely on the content, freeing them from the intricacies of formatting. As a result, their work is presented consistently and professionally, with LaTeX handling the complex layout tasks invisibly. This separation ensures that the essence of the document remains distinct and untangled from its visual presentation, embodying the principle that good design should facilitate content, not obstruct it.

LaTeX is particularly revered in academic and scientific communities for its precision and efficiency in handling documents that contain complex mathematical expressions, bibliographies, and cross-references. It has become the de facto standard for many scientific publications, thesis documents, and conference papers. Its ability to produce publication-quality texts makes it an indispensable tool for researchers and academics worldwide, further showcasing the timeless value of distinguishing between the substance of one’s work and the manner in which it is visually rendered.

Office Templates: A Parallel in Document Writing

In the corporate world, the principle of separating content from its presentation finds a practical application through the use of templates in office suites such as Microsoft Office, Google Docs, and LibreOffice. These software solutions offer a variety of templates that empower users to concentrate on delivering their core message, while relying on pre-designed styles to ensure that documents adhere to a consistent and professional appearance. This functionality not only streamlines document creation but also elevates the quality of output by abstracting the complexities of design.

Despite the availability of these powerful tools, the effective use of templates remains underutilized in many business environments, leading to inefficiencies and a lack of standardization across documents produced within the same organization. The disparity between the potential for streamlined, professional document creation and the reality of inconsistent application underscores a broader challenge in corporate document management. But that’s a whole different story. Nevertheless, the concept of using templates as a means to separate content from presentation underscores a fundamental principle shared across fields ranging from digital publishing to web development: the true value of content is most fully realized when it is presented clearly and without unnecessary complication by design elements.

The Semantic Web: A Foundation Forgotten

The web has long embraced the principle of separation of concerns — a guideline advising that different aspects of application development, such as content, presentation, and behavior, be managed independently. This principle is not arbitrary; it is the culmination of decades of experience and evolution. From the early days of inline styles and table-based layouts to the adoption of CSS for styling, the web’s history is a testament to the ongoing effort to create more maintainable, accessible, and flexible ways to build the web.

The foundation of the web is built on HTML – a language designed to structure content semantically. This means that tags such as <button>, <header>, <article> or <footer> are not just stylistic choices but convey the meaning and role of the content they encapsulate. This semantic approach is vital for accessibility, search engine optimization, and maintainability.

CSS was introduced to separate the concerns of styling from content structure, allowing HTML to focus on content and semantics, and CSS to manage presentation. This separation is a cornerstone of web development best practices, ensuring that content is accessible and usable across different devices and by users with diverse needs.

The Pitfalls of Mixing Style and Content

Breaking Consistency

One of the strongest arguments against embedding style directly within components, as is common in MUI, is the risk to consistency. Components scattered across a project may be styled differently due to the variability of inline styling or prop-based design adjustments. This piecemeal approach can lead to a fragmented user interface, where similar elements offer differing user experiences.

High Maintenance Costs

While mixing design and content can expedite prototyping, it introduces significant long-term maintenance challenges. Styles tightly coupled with logic are harder to update, especially when design changes require navigating through complex component structures. This can lead to a bloated codebase, where updates are slow and error-prone.

The Designer-Developer Handoff

The collaboration between designers and developers is crucial to the success of any project. By mixing styles with component logic, we blur the lines of responsibility, potentially leading to confusion and inefficiencies. Designers are experts in creating user experiences, while developers excel at implementing functionality. The separation of concerns respects these specializations, ensuring that both can work effectively towards a common goal without stepping on each other’s toes.

The Problem with MUI’s Approach

MUI, while offering a rich set of components for rapid development, often blurs the lines between content structure and presentation. This is evident in the verbosity and explicit styling present within component definitions. Consider the following MUI example:

import React from 'react'
import Grid from '@mui/material/Grid'
import Typography from '@mui/material/Typography'
import Button from '@mui/material/Button'
import {Link} from 'react-router-dom'

function MyComponent() {
  return (
    <Grid container spacing={2}>
      <Grid item xs={12} sm={6}>
        <Typography variant="h1" gutterBottom>
          Welcome to My App
        </Typography>
        <Typography variant="body1">
          Get started by exploring our features.
        </Typography>
        <Button variant="contained" color="primary" component={Link} to="/start">
          Get Started
        </Button>
      </Grid>
    </Grid>
  )
}

In this snippet, the presentation details are deeply intertwined with the component’s structure. It is full of incidental complexity: spacing={2}, xs={12} and sm={6} introduce arbitrary numbers without any context. The only purpose of the Grid and Typography elements is to influence the appearance; they carry no semantics. Such pseudo-components should never be used. The properties spacing, xs, sm, variant, gutterBottom, color, and contained dictate the appearance directly within the JSX. This again violates the principle of separating style and content, leading to a scenario where changing the design necessitates modifications to the component code. That is why the React MUI library is the worst front-end library I have ever seen.

Advocating for a More Semantic Approach

Contrast the MUI example with an approach that adheres to the separation of concerns principle. Instead of mixing appearance and content, the full example above can be replaced by a simple standard HTML button within some semantic context, such as a navigation. First, either use an existing library or define your own components. This is a sample of a clean and properly designed component:

import React from 'react'
import {Link} from 'react-router-dom' 

function ButtonLink({to, children}) {
  return <Link className='button' to={to}>{children}</Link>
}

Then you just use your component. Please note that outside of the definitions of basic components, you must not use className or any other attribute that defines semantics or styling. Define base components for this purpose; then all remaining attributes, such as to, have a purely functional meaning. The resulting code is very clean and simple, so it is easy to read and maintain:

import React from 'react'
import {ButtonLink} from '@my/components'

function AppHeader() {
  return (
    <header>
      <p>Welcome to My App</p>
      <p>Get started by exploring our features.</p>
      <ButtonLink to='/start'>Get Started</ButtonLink>
    </header>
  )
}

Here you immediately see the content, so you can focus on the relevant parts.

For the look and feel, just apply some styling, which needs to be written only once in a central CSS style file, e.g.:

header {
  display: flex;
  justify-content: space-between;
}
button, a.button {
  color: white;
  background-color: blue;
  padding: 1ex;
  border: .1ex solid black;
  border-radius: .5ex;
  cursor: pointer;
}

In this simple example, CSS styles the layout inside your <header> tag, which replaces all that <Grid> and <Typography> nonsense. Moreover, the <button> tag and links in button style are both styled identically using CSS, ensuring that all button-like elements across the application maintain a consistent appearance without requiring explicit style definitions in the code. This not only reduces redundancy but also aligns with the semantic nature of HTML, where the tag itself carries meaning.

Furthermore, thanks to the separation of styling and content, a designer can write the CSS and give you basic HTML layout rules, then the developers can focus on the content, instead of having to pay attention to the look and feel.

Please refer to our post Write a Common CSS Style Library for more details on how we suggest to structure your front-end libraries by separating styles from components, templates and content.

The Real Cost of Convenience

While MUI and similar libraries offer rapid development capabilities, they do so at the expense of long-term maintainability, scalability, and adherence to web standards. The explicit declaration of styles and layouts within JSX components leads to a verbose codebase that is harder to maintain and less accessible.

The additional typing and complexity introduced by such frameworks can obscure the semantic nature of the web, making it more challenging to achieve a clean, maintainable, and accessible codebase. This is contrary to all best practices and conflicts with the evolution of web standards, which have consistently moved towards a clear separation of content and presentation.

Embracing Standards for a Sustainable Web

The allure of quick development cycles and visually appealing components cannot be underestimated. However, as stewards of the web, developers must consider the long-term implications of their architectural choices. By embracing HTML’s semantic nature and adhering to the separation of concerns principle, we can build applications that are not only maintainable and scalable but also accessible to all users.

As the web continues to evolve, let’s not forget the lessons learned from its history. Emphasizing semantics, maintaining the separation of content and presentation, and adopting standards-based approaches are crucial for a sustainable, accessible, and efficient web.

Defending Separation of Style and Content

Critics of separating style from content may argue that modern web development practices, like CSS-in-JS, enhance component re-usability, enable dynamic styling, and streamline the development process by colocating styling with component logic. However, adhering to the separation of style and content principle offers significant long-term benefits. It enhances maintainability by allowing changes in design without altering the underlying HTML structure or JavaScript logic. This separation fosters accessibility and scalability, ensuring that websites and applications can grow and adapt over time without becoming entangled in a web of tightly coupled code. Additionally, it aligns with web standards and best practices, promoting a clear organizational structure that benefits developers and designers alike. By maintaining this separation, developers can leverage the strengths of CSS for styling, HTML for structure, and JavaScript for behavior, leading to a more robust, flexible, and accessible web.

For those inclined to integrate styling within React, an advisable approach is packaging styles into a dedicated Style and Component Library. This library should encapsulate the styling based on the Corporate Identity, allowing the actual code to utilize components devoid of additional styling. This methodology garners benefits from both paradigms. However, it’s crucial to note that this often falls short in meeting accessibility standards and restricts the styling’s applicability outside the chosen framework (e.g., React or Angular). In contrast, segregating styling from HTML via CSS and subsequently crafting components ensures technological independence, enabling the same styling to be utilized in diverse contexts like a PHP-based WordPress theme, showcasing its versatility across various platforms.

Development, Pacta

Git Submodule from Existing Path with Branches

This article will show you how you can migrate an existing path within an existing project into a new project, then add that new project as submodule to the original project, while keeping all tags and branches. Typically, branches are lost in migration, but not with this little addition.

Fill a New Repository With a Complete Sub Path of Another Repository

You clone the original project, filter it down to the path you want to extract, change the origin, and push everything to the new origin. With only that, however, not all existing branches are copied: only branches that have been checked out locally are rewritten and pushed to the new location. This is what the for-loop does for you. Note that recent Git versions deprecate git filter-branch in favour of git filter-repo, but the approach stays the same.

git clone git@server.url:path/to/original-project.git
cd original-project
# first create a local branch for every remote branch (skipping the HEAD
# pointer), otherwise these branches are neither rewritten nor pushed
for b in $(git branch -r | grep -v -- '->'); do
    git checkout ${b#origin/}
done
git filter-branch --tag-name-filter cat --subdirectory-filter path/to/submodule -- --all
git remote remove origin
git remote add origin git@server.url:path/to/new-sub-project.git
git push origin --all
git push origin --tags

Now you have a new repository that contains only a part of the previous one.

Replace a Path with a Submodule

Next step is to replace the path in the old repository by the new repository as a submodule. Clone the original project again (delete the previous clone).

git clone git@server.url:path/to/original-project.git
cd original-project
git rm -rf path/to/submodule
git commit -am "remove old path/to/submodule"
git submodule add git@server.url:path/to/new-sub-project.git path/to/submodule
git commit -am "add path/to/submodule as a submodule"

Now you have replaced path/to/submodule by a new submodule.

Pacta Pages

Services and Pages by Pacta AG

Pacta.Cash

This is the easiest crypto wallet on the market. Manage your Ether and Bitcoin securely without having to understand the technical details. You own the keys; all data is stored on your device. Trade without registration.

Pacta.Swiss

Company representation page of the Swiss corporation Pacta Plc (in German: Pacta AG). This page is provided by Pacta Plc.