<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[LokSuvidha Engineering]]></title><description><![CDATA[Fintech for the next billion]]></description><link>http://engineering.loksuvidha.com/</link><image><url>http://engineering.loksuvidha.com/favicon.png</url><title>LokSuvidha Engineering</title><link>http://engineering.loksuvidha.com/</link></image><generator>Ghost 3.42</generator><lastBuildDate>Wed, 04 Mar 2026 18:17:02 GMT</lastBuildDate><atom:link href="http://engineering.loksuvidha.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Airflow - With a twist]]></title><description><![CDATA[<p>For the uninitiated, Airflow is a data processing framework, a glorified cron scheduler, if you will, but as most Python developers might put it...</p><!--kg-card-begin: markdown--><blockquote>
<p>With batteries included</p>
</blockquote>
<!--kg-card-end: markdown--><p>We have two instances of Airflow running at Lok Suvidha. This article covers the more interesting use case. I'll be following up with</p>]]></description><link>http://engineering.loksuvidha.com/airflow/</link><guid isPermaLink="false">5ff7bb817bae2d0001d21fd5</guid><dc:creator><![CDATA[Saurabh]]></dc:creator><pubDate>Tue, 02 Feb 2021 02:51:15 GMT</pubDate><content:encoded><![CDATA[<p>For the uninitiated, Airflow is a data processing framework, a glorified cron scheduler, if you will, but as most Python developers might put it...</p><!--kg-card-begin: markdown--><blockquote>
<p>With batteries included</p>
</blockquote>
<!--kg-card-end: markdown--><p>We have two instances of Airflow running at Lok Suvidha. This article covers the more interesting use case; I'll be following up with another post for the second instance.</p><h2 id="why-the-prologue-">Why? The prologue...</h2><p>To understand the problem, we first have to be familiar with a bit of financial jargon. Every month, we receive loan EMIs from our customers. The way this works is that our customers have signed a mandate with us, giving us the authority to deduct the EMI from their bank accounts. Thankfully, we have NPCI and NACH, which allow for centralized processing with every bank.</p><p>Every month, we <em>present</em> EMI collection requests for our customers. This is done by generating a list of customers from whose bank accounts the EMI is to be collected, sending this list to our <em>sponsor bank</em>, and waiting for the bank to debit each customer's account and credit the amount to ours. This happens at a bulk level, i.e., we don't receive a separate credit for each customer; we receive one bulk amount for all of them. For example, if we are awaiting credit for 10 customers, each with an EMI of Rs 10, we will receive a single credit of Rs 100, not Rs 10 ten times. Additionally, some customers may bounce, i.e., have insufficient funds in their bank accounts for us to debit. So, out of the Rs 100 that was to be received, we might end up getting Rs 90. This is an important point, which will come up later. Now, when we receive the credit, several actions need to be taken: a. mark received/bounced against each customer; b. apply bouncing charges, if any; c. reconcile the bulk payment received from the bank, i.e., check that the payment received in our account equals the sum of the credits the bank has given us against each customer.</p><p>Now, we come to the technical part.
The entire activity is one huge SQL transaction. This <em>all or nothing</em> approach keeps the system in a consistent state. Now, with transactions, all sorts of things can go wrong, especially when you have several of them happening at once: some will conflict, others will time out. We have faced this in the past, where sometimes the entire DB got locked and the team went on a scavenger hunt to find the blocking transaction.</p><!--kg-card-begin: markdown--><h2 id="howthedatewithairflow">How? The date with Airflow...</h2>
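
Before getting to the how, the reconciliation check described in the previous section boils down to a single comparison: does the lump-sum credit from the sponsor bank equal the sum of per-customer credits? A minimal sketch in Python (field names and amounts are illustrative, not our actual schema):

```python
# Minimal sketch of the reconciliation check described above.
# The "amount" field and the list-of-dicts shape are hypothetical.
from decimal import Decimal

def reconcile(bulk_credit, customer_credits):
    """Check that the lump-sum credit received from the sponsor bank
    equals the sum of the per-customer credits in the clearing file."""
    expected = sum((c["amount"] for c in customer_credits), Decimal("0"))
    return bulk_credit == expected

# 10 customers presented at Rs 10 each; one bounced, so only Rs 90 arrived.
credits = [{"customer": i, "amount": Decimal("10")} for i in range(9)]
assert reconcile(Decimal("90"), credits)        # bulk credit matches
assert not reconcile(Decimal("100"), credits)   # mismatch: investigate
```

Using `Decimal` rather than floats matters for money: the comparison must be exact, not approximate.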
<!--kg-card-end: markdown--><p>So we want to solve the classic deadlock problem. Classic problems also have classic solutions, and the immediate one is a queue. So, Kafka? Right?</p><p>Turns out, the answer lies in the context and our experience thus far. Deadlocks come in all shapes and sizes. So do bugs. No matter how resilient your systems are, one must prepare for eventual failures. Additionally, if one is failing fast, one should also recover fast.</p><p>The Kafka approach would be to keep two partitions: one as the main queue and the other as the failure queue. If something goes awry in the main queue, the message is moved to the failure queue and requeued after the bug is fixed or the deadlock is resolved. It is a relatively complex architecture which might require additional plumbing (logging, runtime tracking, etc.) to get right.</p><p>In comes Airflow. The two features of utmost importance to us were the Airflow REST API and the BashOperator. Now, akin to our previous example, the flow is:<br>1. Upload the bank clearing file to storage.<br>2. Start the processing job via a GET call to the Airflow server, passing a JSON configuration object.<br>3. A bash job is started, with the configuration object passed as a parameter to the script.</p><p>Yes, we are using Airflow simply as a task queue, with none of its scheduling capabilities. Why? Because the tooling around it is so good! We have clear visibility into logging, alerting mechanisms are baked right in, and with a task parallelism of 1, we ensure no other transaction gets in between.</p><p>Soon, we extended this into a framework whereby all things accounting are pushed through this "queue". There's a router script at the very front which routes the code flow depending on the configuration object. Essential and long-running tasks like day-end processing have been moved here.</p><!--kg-card-begin: markdown--><h2 id="keepitsimplesweet">Keep it simple, sweet</h2>
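
In that spirit, the router script described above fits in a handful of lines. A hedged sketch, assuming the bash job passes the JSON configuration object as a command-line argument (the task names and handlers here are hypothetical, not our actual configuration schema):

```python
# Hypothetical sketch of the front router: the bash job passes the JSON
# configuration object as argv[1], and we dispatch on a "task" key.
import json
import sys

def process_clearing_file(conf):
    # Placeholder: mark received/bounced, apply charges, reconcile.
    print(f"processing clearing file: {conf.get('file')}")

def run_day_end(conf):
    # Placeholder: long-running day-end accounting tasks.
    print("running day-end tasks")

HANDLERS = {
    "clearing": process_clearing_file,
    "day_end": run_day_end,
}

def route(raw_conf):
    """Parse the JSON configuration object and dispatch to a handler."""
    conf = json.loads(raw_conf)
    handler = HANDLERS.get(conf["task"])
    if handler is None:
        raise ValueError(f"unknown task: {conf['task']}")
    handler(conf)

if __name__ == "__main__" and len(sys.argv) > 1:
    route(sys.argv[1])
```

With a task parallelism of 1, this single entry point also guarantees that only one accounting transaction runs at a time, which is the whole point of the "queue".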
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><blockquote>
<p>I had a Problem, so I decided to use Java; now I have a ProblemFactory</p>
</blockquote>
<!--kg-card-end: markdown--><p>The key takeaway, and the north star guiding our choices, is the desire to keep the code simple, linear, and predictable. We have dealt with complex MVCs and ORMs that jump across 10 different files to perform a simple SELECT query. It is okay to have complex external tooling that keeps watch over the pipeline. However, introducing code that has nothing to do with business logic, and is only there for squashing bugs that came up because of a really cool asynchronous, callback-driven architecture, is not really <em>cool</em>.</p>]]></content:encoded></item><item><title><![CDATA[The road so far...]]></title><description><![CDATA[<p>At LokSuvidha Finance, we are at the edge of a fintech revolution. Never has there been such excitement around a product as boring as finance. BFSI is usually the last sector to embrace bleeding-edge technologies, since they are, well, bleeding. Additionally, when you deal with other people’s money, it</p>]]></description><link>http://engineering.loksuvidha.com/the-road-so-far/</link><guid isPermaLink="false">5ff3f6287bae2d0001d21fbf</guid><dc:creator><![CDATA[Saurabh]]></dc:creator><pubDate>Tue, 05 Jan 2021 05:17:49 GMT</pubDate><content:encoded><![CDATA[<p>At LokSuvidha Finance, we are at the edge of a fintech revolution. Never has there been such excitement around a product as boring as finance. BFSI is usually the last sector to embrace bleeding-edge technologies, since they are, well, bleeding. Additionally, when you deal with other people’s money, it brings with it a sense of responsibility and accountability, and “moving fast and breaking things” is not really a guiding principle that we can work with. And yet we, as a tech-first company, cannot accept this, for complacency is the death of innovation.
Bridging this culture of agility in tech with serving our customers a consistent experience is a narrow road few would dare to ride on.</p><p>We are a small company compared to yesteryear’s giant banking institutions. This has helped us, in part, to be lean, agile, and most importantly, swift. Swift in reacting to market changes, swift in fixing problems, swift in identifying opportunities, and even swift in letting go of decisions that might not have been fruitful. We have been learning a lot along the way as we scaled, and technology has been the protagonist in this play. Led by the vision of driving innovation, we are a small team of 6 engineers who believe in breaking the traditional process models. We do not believe in buzzwords and trends, or the classic trap of treating every problem as a nail because you have a hammer. We believe in choosing the right tool for the job and being practical about it. We also believe in Free and Open Source Software: each and every one of our employees, even the non-IT staff, uses Linux. This was easier to do since we do not use any desktop-specific software; everything runs in the browser. Our complete tech stack is built upon the massive learnings of the open-source community and their contributions. This helps us keep our costs low and removes the biases and blind spots that come with buying a proprietary product. We have experienced that proprietary products come with their own problems and dependencies. Keeping things simple and predictable has helped us diagnose issues faster and provide immediate fixes.</p><p>Process has been our guiding tool for our system designs. Right from the UI to the intricate logic that helps us collaborate with our partners, it hasn’t been a road without friction. Having a system with the flexibility to introduce ad-hoc subsystems on demand has been one of the key reasons for our agility. This cannot happen without a strong process-driven system in place.
Having a process can sometimes introduce bureaucracy; other times, bureaucracy is a result of working with legacy systems. One such example is NACH, which has improved dramatically, especially with the government push for digitization. We moved from physically sending the NACH form to the bank, to sending an image, and now, with the advent of E-NACH, no physical image is needed at all. Each of these requires a radically different approach to designing the process: one takes days, while the other happens in an instant.</p><p>We believe in automating mundane mechanical work away. Humans should be in charge of making decisions and nothing more. We believe in making quality-of-life improvements for our operations team. We have faced several practical challenges in this journey, especially when it comes to the rural context. Unclear customer documents and data inconsistencies make complete automation difficult, and hence a hybrid approach becomes necessary. Eventually, our team only checks document parameters that couldn’t be verified by the system, such as handwritten receipts, signatures, etc.</p><p>Another aspect where automation has helped us is interfacing our systems with our lending partners. How do you ask a giant like ICICI Bank to technically onboard us? Why should a large NBFC like L&amp;T Finance build a software layer for a single partner? Just the integration would take months or even years, and priorities are always debatable. Time taken is business loss. So we took what we had from our partners, their mobile apps and web portals, and built a bot to punch customer data into their systems, completely transparently to the teams on both sides. This needed a hacktivist approach: disassembling the workings of software that was alien to us, then fitting the puzzle into our processes while ensuring this did not break the flow with our other partners.
The bots work in tandem with our teams, syncing data and moving documents like an assembly line. This has reduced our redundant workload tremendously.</p><p>Data science, the choice of letting data drive decisions, has been vital to our tech acceleration. Everything, right from building a custom scorecard in line with the rural context to closing in on fraud without hampering the experience of legitimate customers, is entirely data-driven. This has helped us reduce turnaround time, ultimately improving customer experience. Another consequence has been the removal of biases: decisions taken are objective and uniform across the board. Firebolt, our reports and dashboarding system, helps our teams keep a close watch on the pulse of the organization. Be it sales, collections, or operations, every team member is given real-time KPIs to monitor; this has helped us identify bottlenecks and optimize key hotspots. Data is a lamppost: it should be used for illumination rather than as support for one’s own biases. Having this wealth of data shared across a multidisciplinary team ensures that when subjectivity creeps into decisioning, all perspectives are catered to.</p><p>Technology brings scale: the virtual world is much more malleable, elastic, and hyperconverging when it comes to bringing people together. It is easier to align views and drive actions faster when variables are reduced and clarity permeates the organization. This can only be achieved by building a technology stack that leads decisions, even in times of uncertainty. We have been extremely fortunate to stand at a crossroads where a lot has changed in a relatively short amount of time. As we move forward, we strongly believe that agility, more than anything, will be what conquers disruption every day, and we intend to adapt and disrupt.</p>]]></content:encoded></item></channel></rss>