AirCTO is revolutionising the tech recruitment industry by bringing the power of cutting-edge technology into recruiting. Our main focus is on A.I. and the problems it can solve in recruitment.

Over a period of 3 years, AirCTO as a product has scaled a ton with various features. Our technology has kept pace with the product changes very well. Since the beginning, we made sure that the system architecture is scalable, and tried our best to keep the different systems as independent as possible.

Here is a snapshot of our current system architecture:

System Architecture of AirCTO

Woah! Too many things happening here?

Let's go through each piece to understand it better.

Batman

Batman is our API layer for all the interfaces we have for our customers, candidates, experts (interviewers) and our internal sourcing team. It is the heart of our entire system.

Batman is written in Go on top of a PostgreSQL database. The database runs on AWS RDS. Batman handles all user requests from the different frontend apps. Some of the things it handles are:

  1. Job description and candidate management
  2. Chatbot orders and tracking
  3. Automated interview scheduling, conducting (Phone & Video calls) and reporting. (We use Knowlarity for Phone and TokBox for video calling.)
  4. Invoice management for our customers
  5. and many more nifty features..

It also brings all the common features like user login, subscription and token management under one umbrella. This accelerates our development process and saves us from solving the same problems again and again.

Batman ❤ 

Batman also has candidate search functionality, which is powered by Elasticsearch. It also uses NSQ to manage time-consuming tasks like interview scheduling, uploading resumes et al.
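To give a flavour of what that search involves, here is a minimal sketch of the kind of query-DSL payload a service might build for Elasticsearch. The index fields and the helper below are illustrative assumptions, not our actual schema:

```python
# Illustrative sketch of building an Elasticsearch bool query for
# candidate search. Field names ("skills", "experience_years",
# "location") are hypothetical, not our real mapping.

def build_candidate_query(skills, min_experience_years, location=None):
    """Build an Elasticsearch bool query matching all given skills,
    a minimum experience, and optionally a location filter."""
    must = [{"match": {"skills": skill}} for skill in skills]
    must.append({"range": {"experience_years": {"gte": min_experience_years}}})
    query = {"query": {"bool": {"must": must}}}
    if location:
        query["query"]["bool"]["filter"] = [{"term": {"location": location}}]
    return query

q = build_candidate_query(["python", "docker"], 3, location="bangalore")
```

In production this payload would be sent to the Elasticsearch `_search` endpoint; building it as plain data keeps the query logic easy to test.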

Parser and Crawler

We have three different micro-services which enable us to parse and crawl candidate information, JD (job description) information, etc.

1. Resume Parser: Takes a candidate's resume and extracts information such as name, email, phone number, skills, education, work experience and many more data points. It has its own database (PostgreSQL) to help us re-create the record in case of an issue.

2. JD Parser: Takes the raw JD content and classifies it into job category, required skills, years of experience, location and many more data points.

3. Spiderman (Crawler): Takes a supported site's URL and crawls the page to get user and job information based on the query.

All these micro-services are written in Python, and Batman talks to them through REST APIs over a private network.
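As a toy illustration of the kind of extraction the Resume Parser performs, here is a regex-based sketch. The real service uses much richer NLP; these two patterns and the function name are made up for illustration:

```python
import re

# Toy sketch of resume data-point extraction. The production parser
# handles many more fields and formats; these regexes are illustrative.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"(?:\+91[-\s]?)?\d{10}")  # Indian numbers, simplified

def parse_resume(text):
    """Extract a couple of basic data points from raw resume text."""
    email = EMAIL_RE.search(text)
    phone = PHONE_RE.search(text)
    return {
        "email": email.group(0) if email else None,
        "phone": phone.group(0) if phone else None,
    }
```

Each extracted record is then persisted to the parser's own PostgreSQL database so it can be re-created if anything goes wrong downstream.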

Communication Protocols

Most of our services interact with each other through REST APIs. REST has been our primary architecture style.

But we also use gRPC for the Batman <> Jarvis interaction, where we have found gRPC to be more performant than REST.
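For readers unfamiliar with gRPC, a service contract is declared in a `.proto` file and client/server stubs are generated from it. The service and message names below are purely illustrative, not our actual Batman <> Jarvis contract:

```proto
// Hypothetical sketch of a Batman <> Jarvis gRPC contract.
syntax = "proto3";

package jarvis;

service Jarvis {
  // Batman asks Jarvis to answer a candidate's question about a job.
  rpc Answer (QuestionRequest) returns (AnswerReply) {}
}

message QuestionRequest {
  string job_id = 1;
  string question = 2;
}

message AnswerReply {
  string answer = 1;
  float confidence = 2;
}
```

Because messages are binary-encoded Protocol Buffers over HTTP/2, calls like this tend to be lighter on the wire than the equivalent JSON-over-REST round trip.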

Frontend Systems

We have six different frontend apps offering interfaces to six different stakeholders:

  1. Gotham: Our customer-facing application that empowers customers to manage their candidates.
  2. Mystique: Our marketing website, which has a mix of static and dynamic content.
  3. Fury: Our expert-focused website where experts conduct interviews with the candidates and share their feedback on them.
  4. Magnifier: A simple interface that allows candidates to quickly go through MCQ tests shared by our internal sourcing team.
  5. Discover: A simple interface for our internal sourcing team to search candidates from our database (Batman) based on experience level, skills, companies, education, gender etc.
  6. Gaffer: An internal CRM built for our Sourcing and Sales teams to manage orders, candidates, experts, interviews et al.

All these frontend systems have been built using NodeJS, ReactJS and Redux, running on AWS EC2 instances.

Jarvis: A.I Chatbots

Everything that happens around conversational bots at AirCTO is Jarvis. It is the core of the AI Chatbot services at AirCTO. Jarvis is written in Python and R, and has its own database (the Vandal data layer).

We have three different implementations of Jarvis: FAQ Chatbot, HR Chatbot & Screening Bot.

FAQ Chatbot

We created the FAQ Chatbot to automate the interaction that happens between candidates and recruiters. Our internal sourcing team shares a unique chat link, with job and company information, with potential candidates. The FAQ Chatbot responds to a candidate's queries regarding the job/company even if it's the middle of the night. This makes our sourcing team more efficient and helps them reach candidates faster than ever.

This is powered by Rasa NLU and spaCy.
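Conceptually, the first step of the bot is intent classification: mapping a candidate's utterance to a known intent. In production Rasa NLU and spaCy do the heavy lifting; the keyword-overlap matcher below is a deliberately simplified toy, and the intents and keywords are made up:

```python
# Toy intent matcher illustrating the idea behind the FAQ Chatbot.
# The real bot uses Rasa NLU + spaCy; these intents/keywords are invented.

INTENT_KEYWORDS = {
    "ask_salary": {"salary", "ctc", "compensation", "pay"},
    "ask_location": {"location", "office", "remote", "city"},
    "ask_tech_stack": {"stack", "technology", "language", "framework"},
}

def classify_intent(utterance):
    """Return the intent whose keywords overlap the utterance the most,
    or "fallback" when nothing matches."""
    words = set(utterance.lower().split())
    scores = {
        intent: len(words & keywords)
        for intent, keywords in INTENT_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"
```

Once an intent is identified, the bot looks up the answer for that job/company, which is how it can respond at any hour without a human in the loop.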

HR Chatbot

It allows our customers to set up questions, with logic jumps, which they would like to ask their candidates to help them shortlist candidates faster. We manage logic jumps using graphs and paths, and avoid cyclic dependencies by keeping the question graph a Directed Acyclic Graph (DAG).
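The key validation is making sure a customer's logic jumps never loop back on themselves. A minimal sketch of that check, assuming jumps are represented as a `{question: [next_questions]}` adjacency map (the representation and question IDs are illustrative):

```python
# Sketch of validating logic jumps as a DAG via depth-first search
# with three-colour marking. A "gray" node seen again means a back
# edge, i.e. a cycle that would trap the candidate in a loop.

def has_cycle(jumps):
    """Return True if the question graph {q: [next_qs]} contains a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def visit(q):
        color[q] = GRAY
        for nxt in jumps.get(q, []):
            c = color.get(nxt, WHITE)
            if c == GRAY:
                return True           # back edge: cycle found
            if c == WHITE and visit(nxt):
                return True
        color[q] = BLACK
        return False

    return any(color.get(q, WHITE) == WHITE and visit(q) for q in jumps)
```

Rejecting cyclic configurations up front means every candidate's path through the questions is guaranteed to terminate.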

You can give it a try – hope it doesn't disappoint you. :)

Screening Bot

We have a screening layer with question banks for programming languages, which are shared with candidates to screen them on certain skills.

At AirCTO, we use adaptive tests to measure the ability of the candidate in a specific domain (Python, front-end development, Docker, etc.). The adaptive test is based on Item Response Theory, a statistical model from psychometrics.
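At its core, Item Response Theory models the probability that a candidate of ability θ answers a given item correctly. Here is a minimal sketch of the two-parameter logistic (2PL) model with a naive "pick the closest-difficulty item" selection rule; the real engine is considerably more involved, and the item schema below is an assumption:

```python
import math

def p_correct(theta, a, b):
    """2PL IRT model: probability that a candidate with ability `theta`
    answers an item correctly, given item discrimination `a` and
    difficulty `b`."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def next_item(theta, items):
    """Simplified adaptive selection: pick the unanswered item whose
    difficulty is closest to the current ability estimate, i.e. roughly
    the most informative question for this candidate."""
    return min(items, key=lambda item: abs(item["b"] - theta))
```

After each answer the ability estimate is updated and the next item is chosen accordingly, so strong candidates quickly reach harder questions and weak ones are not stuck on items far above their level.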

The Jarvis – AI Chatbot systems

Deployment

We dockerize all our apps and host our code repositories on GitHub. Every deployment to Staging and Production goes through our CI system (we use self-hosted Jenkins), which makes sure that every build passes successfully. We write test cases and aim to achieve 90% code coverage. Once the build succeeds and the code review is done, the code is deployed to the cloud server.
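As a rough sketch of what dockerizing one of our Go services could look like, here is an illustrative multi-stage Dockerfile; the paths, binary name and base images are assumptions, not our actual build:

```dockerfile
# Illustrative multi-stage build for a Go service like Batman:
# compile in a full Go image, ship only the static binary.
FROM golang:1.11 AS build
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 go build -o batman ./cmd/batman

FROM alpine:3.8
COPY --from=build /app/batman /usr/local/bin/batman
EXPOSE 8080
CMD ["batman"]
```

The multi-stage split keeps the final image small, which makes CI builds and deployments noticeably faster.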

We use AWS and Google Cloud for production and staging deployments respectively. We have dedicated `aircto.in` as our staging domain to avoid any confusion for the development and testing teams.

Logging & Monitoring

We use Papertrail and Bugsnag to manage our app logs and errors, and Pingdom for monitoring.

The entire system is spread across multiple AWS EC2 instances. We neatly document our APIs and business logic and keep them updated, often. ;)

We also use Kafka for our candidate discovery. We will write a separate article for it soon. :)

Ending notes..

There is no such thing as an ideal system architecture. It only gets better with time and persistence. Therefore, we still have a lot to do to improve our systems, and we consistently try to keep up with changing requirements.

How did you like our system architecture? Feel free to share your feedback.

PS: Do share this article if you like what we're doing.