Microservices With AngularJS, Spring Boot, and Kafka – by DZone

Microservices architecture has become the dominant approach for building scalable web applications that can be hosted on the cloud. Asynchronous end-to-end calls, starting from the view layer down to the backend, are important in a microservices architecture because there is no guarantee that the container which receives a call will be the one that handles the response.

In addition, WebSocket gives the user a new experience: the backend can communicate directly with users who have the browser open and push offers, promotions, and chat messages to them. I think a web application should have both synchronous and asynchronous calls: synchronous calls are used for reading data from logical views, and asynchronous calls are used for back-end transactions, as shown in the following diagram:

[Architecture diagram]

Architecture Components

1-View Layer

The view layer is an AngularJS application with the SockJS library, hosted on an Apache HTTP server (httpd) deployed on Docker.

The view page registers to a STOMP topic:

Controller JS:

angular.module("app.mi", ['app.common', 'ngStomp'])
    .controller("miController", ['$scope', 'GetJSonMI', 'wsConstant', '$stomp', function ($scope, GetJSonMI, wsConstant, $stomp) {
        emps = { MobileNum: wsConstant.MobileNum };
        $scope.UserData = emps;
        var loadingText = "loading ..."
        $scope.mobileInternet = {usbMsisdn:loadingText, balance: loadingText, sallefny: loadingText, ratePlan: loadingText, consumedQouta: 0, totalQouta: 0}
        $scope.adsText = "Stay tuned for new offers ..."
        $stomp
            .connect('/poc-backend/ws', {})
            // frame = CONNECTED headers
            .then(function (frame) {
                var subscription = $stomp.subscribe('/user/topic/mi', function (payload, headers, res) {
                    $scope.mobileInternet = payload
                    $scope.$apply();
                }, {})
                var adsSubscription = $stomp.subscribe('/topic/ads', function (payload, headers, res) {
                    $scope.adsText = payload.content
                    $scope.$apply();
                }, {})
                // Send message
                $stomp.send('/app/mi', {
                    username: $scope.UserData.MobileNum
                }, {})
            })
    }])

Docker file:

FROM httpd:2.4
COPY ./public-html/  /usr/local/apache2/htdocs/
COPY httpd.conf /usr/local/apache2/conf/httpd.conf
RUN chmod 644 /usr/local/apache2/conf/httpd.conf

2-API Gateway

This is the web service that communicates with AngularJS. The API gateway uses the STOMP and REST protocols. This microservice, developed with Spring Boot, acts as a Kafka producer and consumer, each running in a separate thread:

1-Producing messages: send the message to the Kafka broker on topic 1.

2-Consuming messages: listen to the incoming messages from Kafka on topic 2.

For consuming, there is a pool of threads that handles the incoming messages, and an executor service that listens to the Kafka stream and assigns each message to a thread from the pool, as sketched below.
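
A rough sketch of this consumer side, assuming the plain Kafka consumer API, could look like the following. The broker address, topic name, group id, pool size, and handleMessage method are illustrative, not taken from the project:

import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class MiResponseConsumer implements Runnable {

    // pool of worker threads that handle the incoming messages
    private final ExecutorService workers = Executors.newFixedThreadPool(10);

    @Override
    public void run() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "poc-kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "ui-controller");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // "mi-responses" stands in for "topic 2" in the description above
            consumer.subscribe(Collections.singletonList("mi-responses"));
            while (!Thread.currentThread().isInterrupted()) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    // hand the message to a worker so the poll loop is never blocked
                    workers.submit(() -> handleMessage(record.value()));
                }
            }
        }
    }

    private void handleMessage(String json) {
        // parse the JSON response and push it to the websocket session (see the gateway controller below)
    }
}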


Main Class:

@SpringBootApplication
@Configuration
@ComponentScan
@EnableAutoConfiguration
@EnableWebSocketMessageBroker
public class MicroserviceWebApplication extends AbstractWebSocketMessageBrokerConfigurer {
    public static void main(String[] args) {
        System.setProperty("spring.devtools.restart.enabled", "true");
        SpringApplication.run(MicroserviceWebApplication.class, args);
    }
    @Override
    public void configureMessageBroker(MessageBrokerRegistry config) {
        // in-memory simple broker for everything under /topic; client sends go to /app
        config.enableSimpleBroker("/topic");
        config.setApplicationDestinationPrefixes("/app");
    }
    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        // SockJS endpoint the AngularJS client connects to
        registry.addEndpoint("/ws").setAllowedOrigins("*").withSockJS();
    }
}


Controller Class:

    @MessageMapping("/mi")
    public void getMiDetails(User user, @Headers Map<String, Object> headers) {
        logger.info("getMiDetails(user: {}): starts ...", user);
        if(user == null || user.getUsername() == null || user.getUsername().isEmpty()) {
            logger.info("getMiDetails(user: {}): user name is empty, ignoring the request.", user);
            return;
        }
        String sessionId = SimpMessageHeaderAccessor.getSessionId(headers);
        if(sessionId == null || sessionId.isEmpty()) {
            logger.info("getMiDetails(user: {}): no websocket session found, ignoring the request.", user);
        return;
    }
    sessions.put(sessionId, "");
    try {
        String messageText = mapper.writeValueAsString(new MiRequest(UUID.randomUUID().toString(), sessionId, user.getUsername()));
        logger.info("getMiDetails(user: {}): sending request to message broker...", user);
        messageProducer.send(miRequestsTopic, messageText);
        logger.info("getMiDetails(user: {}): done.", user);
    } catch(JsonProcessingException ex) {
        logger.error("getMiDetails(user: {}): failed, an error occured while parsing the request to json.", user, ex);
    }
}
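
The consumer side of the gateway is not listed here, but since the AngularJS controller subscribes to /user/topic/mi, one plausible way to push a Kafka response back to the originating websocket session is via SimpMessagingTemplate addressed by session id. This is only a sketch under that assumption; the MiResponseForwarder class name and the payload type are made up:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.messaging.simp.SimpMessageHeaderAccessor;
import org.springframework.messaging.simp.SimpMessageType;
import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.stereotype.Component;

@Component
public class MiResponseForwarder {

    @Autowired
    private SimpMessagingTemplate messagingTemplate;

    // Called from the Kafka consumer thread when a response arrives on "topic 2".
    // The sessionId is the one the controller above put into the MiRequest.
    public void forwardToWebSocket(String sessionId, Object payload) {
        // address the message to the individual websocket session rather than a named principal
        SimpMessageHeaderAccessor accessor = SimpMessageHeaderAccessor.create(SimpMessageType.MESSAGE);
        accessor.setSessionId(sessionId);
        accessor.setLeaveMutable(true);

        // delivered on /user/topic/mi, which the AngularJS controller subscribes to
        messagingTemplate.convertAndSendToUser(sessionId, "/topic/mi", payload, accessor.getMessageHeaders());
    }
}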

Docker file:

FROM java:8
VOLUME /tmp
RUN mkdir /app
ADD uicontroller-service-0.1.0.jar /app/app.jar
ADD runboot.sh /app/
RUN bash -c 'touch /app/app.jar'
WORKDIR /app
RUN chmod a+x runboot.sh
EXPOSE 8080
CMD /app/runboot.sh

3-DAO Microservice

I call this microservice the DAO microservice. It handles DB transactions and produces/consumes on the Kafka broker. It doesn't have a controller; its Kafka side is sketched after the service class below.

Main Class:

@SpringBootApplication
public class MicroserviceBackendApplication{
    public static void main(String[] args) {
        SpringApplication.run(MicroserviceBackendApplication.class, args);
    }
}

Service Class:

@Service
public class UserServiceImpl implements UserService {
    private static final Logger logger = LoggerFactory.getLogger(UserServiceImpl.class);
    @Autowired(required=true)
    UserRepository userRepo;
    @Autowired(required=true)
    MobileInternetRepository miRepo;
    @Override
    public User getUserData(String user, String password) {
        User usr = userRepo.findOneByMsisdn(user);
        return usr;
    }
    @Override
    public Optional<MobileInternet> getUserMi(String msisdn) {
        logger.info("getUserMi(msisdn: {}) starts...", msisdn);
        Optional<MobileInternet> result = miRepo.findByUserName(msisdn);
        logger.info("getUserMi(msisdn: {}) done, result: {}", msisdn, result);
        return result;
    }
}
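
The Kafka side of the DAO microservice is not listed in the article. A minimal sketch, assuming spring-kafka is on the classpath with a configured listener container, could consume the request, call the service above, and publish the result on the response topic. The topic names and the MiRequest getters are assumptions:

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

import com.fasterxml.jackson.databind.ObjectMapper;

@Component
public class MiRequestListener {

    @Autowired
    private UserService userService;

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    private final ObjectMapper mapper = new ObjectMapper();

    // "mi-requests" / "mi-responses" stand in for "topic 1" / "topic 2" in the text
    @KafkaListener(topics = "mi-requests")
    public void onMiRequest(String message) throws Exception {
        // MiRequest and its getters mirror the constructor used in the gateway controller (assumed)
        MiRequest request = mapper.readValue(message, MiRequest.class);

        // query the data through the service shown above
        Optional<MobileInternet> mi = userService.getUserMi(request.getUsername());

        // echo the session id back so the gateway can route the reply to the right websocket
        Map<String, Object> response = new HashMap<>();
        response.put("sessionId", request.getSessionId());
        response.put("payload", mi.orElse(null));
        kafkaTemplate.send("mi-responses", mapper.writeValueAsString(response));
    }
}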

Docker file:

FROM java:8
VOLUME /tmp
RUN mkdir /app
ADD backend-service-0.1.0.jar /app/app.jar
ADD runboot.sh /app/
RUN bash -c 'touch /app/app.jar'
WORKDIR /app
RUN chmod a+x runboot.sh
CMD /app/runboot.sh

Docker-compose file:

version: '2'
volumes:
  postgres-data:
services:
# static-web
  poc-static-web:
    image: static-web
    hostname: poc-static-web
    expose:
    - "80"
    ports:
    - "80:80"
    depends_on:
    - poc-backend
# Web
  poc-web:
    image: uicontroller-service
    hostname: poc-web
    expose:
    - "8090"
    ports:
    - "8090:8090"
    links:
    - "poc-backend"
    - "poc-kafka"
    - "poc-postgres"
    depends_on:
    - poc-backend
    - poc-kafka
    - poc-postgres
# Backend
  poc-backend:
    image: backend-service
    hostname: poc-backend
    links:
    - "poc-kafka"
    - "poc-postgres"
    depends_on:
    - poc-kafka
    - poc-postgres
# Apache Kafka
  poc-kafka:
    image: spotify/kafka
    ports:
    - "9092:9092"
    - "2181:2181"
    hostname: poc-kafka
    expose:
    - "9092"
    - "2181"
    # environment:
    #     ADVERTISED_HOST: '0.0.0.0'
    #     ADVERTISED_PORT: '9092'
# Database
  poc-postgres:
    image: postgres
    ports:
    - "5432:5432"
    hostname: poc-postgres
    expose:
    - "5432"
    volumes:
       - "postgres-data:/var/lib/postgresql/data"

Design Considerations:

1-Persist socket connections

All the websocket connections should be persisted in an in-memory DB so that session state survives a container failure. When the container comes back up, it should read all the disconnected websockets and try to re-establish them.
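
A minimal sketch of recording sessions, assuming Spring Data Redis is used as the in-memory store (the hash key name and the stored value are illustrative):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.event.EventListener;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.messaging.simp.stomp.StompHeaderAccessor;
import org.springframework.stereotype.Component;
import org.springframework.web.socket.messaging.SessionConnectedEvent;
import org.springframework.web.socket.messaging.SessionDisconnectEvent;

@Component
public class WebSocketSessionStore {

    @Autowired
    private StringRedisTemplate redis;

    // record every websocket session so a restarted container can see which
    // clients were connected before the failure
    @EventListener
    public void onConnected(SessionConnectedEvent event) {
        String sessionId = StompHeaderAccessor.wrap(event.getMessage()).getSessionId();
        redis.opsForHash().put("ws:sessions", sessionId, "connected");
    }

    @EventListener
    public void onDisconnected(SessionDisconnectEvent event) {
        redis.opsForHash().delete("ws:sessions", event.getSessionId());
    }
}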

2-Persist the DB using clustering.

3-Use async calls for DB transactions and sync requests for DB queries. For sync requests, the API gateway calls the DAO microservice directly, as sketched below. This architecture pattern can be extended into an event sourcing (CQRS) approach, which persists all transaction events in a Cassandra table and builds logical views for data queries.
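
A sketch of that synchronous read path could be as simple as a RestTemplate call from the gateway. The URL and the query endpoint on the DAO service are hypothetical, since the POC's DAO service exposes no controller:

import org.springframework.web.client.RestTemplate;

public class MiQueryClient {

    private final RestTemplate restTemplate = new RestTemplate();

    // Synchronous read path: query the DAO service directly over REST instead of going through Kafka.
    // The endpoint and the MobileInternet DTO on the gateway side are assumptions.
    public MobileInternet getMi(String msisdn) {
        return restTemplate.getForObject(
                "http://poc-backend:8080/api/mi/{msisdn}", MobileInternet.class, msisdn);
    }
}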

The source code is available here.
