Q: How did you end up at Vave?
A: I came to the UK in 2019 to do my Master’s in data science at Royal Holloway, University of London. Prior to this, I had a couple of years’ work experience as a Data Scientist at an analytics consultancy in India. I have always been interested in working with data, and my background in computer science made me keen to pursue a role in this field after my degree. A few months after I graduated, I joined Vave as a data scientist. Since then, I have found it very easy to settle into the team and have worked on some riveting projects.
Q: What are the best aspects of your job?
A: As a data scientist, my natural instinct is to look at data for actionable insights that can help the business. At Vave, data is at the core of our decision-making, so the data science projects we work on add real value to our team and help us achieve our goals. This also means there are endless possibilities and use cases for the data we hold and collect on a daily basis. It makes my work very interesting: I get to work on some really exciting projects, which is also a great learning experience for someone who has recently entered the insurance industry.
Q: What are you working on right now?
A: I am currently focusing on insurance claims data, which enables innovative analytics solutions to support our underwriting methodologies. We have claims data coming in from different Third Party Administrators (TPAs), and it is critical to streamline the data collection and storage process before we can perform any kind of analysis on it. Once the data is structured, we use dashboards with intuitive visualisations to monitor changes and identify potential trends. This project has also given me an opportunity to work on natural language processing, which was one of my preferred subjects during my Master’s degree.
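To illustrate the kind of streamlining described above, here is a minimal sketch of normalising claims files from two TPAs into one common schema. The TPA names, column mappings, and fields (`claim_id`, `paid_amount`, `loss_description`) are all hypothetical, not Vave’s actual feeds:

```python
# Hypothetical sketch: normalising claims data from different TPAs.
# TPA names and column mappings are illustrative assumptions.
import pandas as pd

# Each TPA reports the same facts under different column names.
COLUMN_MAPS = {
    "tpa_a": {"ClaimRef": "claim_id", "PaidAmt": "paid_amount", "LossDesc": "loss_description"},
    "tpa_b": {"claim_number": "claim_id", "paid": "paid_amount", "description": "loss_description"},
}

def normalise(raw: pd.DataFrame, tpa: str) -> pd.DataFrame:
    """Rename a TPA's columns to the common schema and tag the source."""
    tidy = raw.rename(columns=COLUMN_MAPS[tpa])[["claim_id", "paid_amount", "loss_description"]]
    tidy["source_tpa"] = tpa
    return tidy

tpa_a = pd.DataFrame({"ClaimRef": ["A1"], "PaidAmt": [1200.0], "LossDesc": ["water damage"]})
tpa_b = pd.DataFrame({"claim_number": ["B7"], "paid": [800.0], "description": ["burst pipe"]})

# One structured table that dashboards and NLP work can build on.
claims = pd.concat([normalise(tpa_a, "tpa_a"), normalise(tpa_b, "tpa_b")], ignore_index=True)
print(claims)
```

Once every feed lands in the same shape, the free-text `loss_description` column is also where NLP techniques could be applied.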
Q: What is your next big project on the horizon?
A: The next big project will be to analyse risk data to develop a model that helps us determine our propensity to bind a risk. This will involve a lot of data cleaning and transformation, along with statistical testing, to get the best out of the machine learning algorithm. The scope of this project is not limited to building a cutting-edge model with solid evaluation metrics; we also want to understand which data points drive a risk to be bound by us. That understanding will be very helpful in making recommendations to optimise the existing pricing algorithm.
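A propensity-to-bind model like the one described could be sketched as a simple binary classifier whose coefficients show which data points drive binding. Everything here is an illustrative assumption: the feature names, the synthetic data, and the choice of logistic regression are not Vave’s actual model or schema.

```python
# Hypothetical propensity-to-bind sketch using scikit-learn.
# Features and data are synthetic; logistic regression is one possible choice.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
risks = pd.DataFrame({
    "premium": rng.normal(10_000, 2_500, n),
    "sum_insured": rng.normal(1_000_000, 250_000, n),
    "broker_tenure_years": rng.integers(0, 20, n).astype(float),
})
# Synthetic target: binding loosely tied to premium, plus noise.
bound = (risks["premium"] + rng.normal(0, 2_000, n) > 10_000).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    risks, bound, test_size=0.2, random_state=0
)
# Standardising puts coefficients on a comparable scale.
scaler = StandardScaler().fit(X_train)
model = LogisticRegression().fit(scaler.transform(X_train), y_train)

# Coefficient magnitudes indicate which data points influence binding.
influence = pd.Series(model.coef_[0], index=risks.columns)
accuracy = model.score(scaler.transform(X_test), y_test)
print(influence.sort_values(key=abs, ascending=False))
print("test accuracy:", accuracy)
```

Inspecting the standardised coefficients is the "understand which data points drive binding" half of the project; in this synthetic setup the `premium` feature dominates, because the fake target was built from it.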