In the Author Spotlight series, TDS Editors chat with members of our community about their career path in data science and AI, their writing, and their sources of inspiration. Today, we're thrilled to share our conversation with Mariya Mansurova.
Mariya's story is one of continuous learning. Starting with a strong foundation in software engineering, mathematics, and physics, she's spent more than 12 years building expertise in product analytics across industries, from search engines and analytics platforms to fintech. Her distinctive path, including hands-on experience as a product manager, has given her a 360-degree view of how analytical teams can help businesses make the right decisions.
Now serving as a Product Analytics Manager, she draws energy from discovering fresh insights and innovative approaches. Each of her articles on Towards Data Science reflects her latest "aha!" moment: a testament to her belief that curiosity drives real progress.
You've written extensively about agentic AI and frameworks like smolagents and LangGraph. What excites you most about this emerging space?
I first started exploring generative AI largely out of curiosity and, admittedly, a bit of FOMO. Everyone around me seemed to be using LLMs or at least talking about them. So I carved out time to get hands-on, starting with the very fundamentals like prompting techniques and LLM APIs. And the deeper I went, the more excited I became.
What fascinates me the most is how agentic systems are shaping the way we live and work. I believe this influence will only continue to grow over time. That's why I take every chance to use agentic tools like Copilot or Claude Desktop, or to build my own agents using technologies like smolagents, LangGraph, or CrewAI.
The most impactful use case of agentic AI for me has been coding. It's genuinely impressive how tools like GitHub Copilot can improve the speed and the quality of your work. While recent research from METR has questioned whether the efficiency gains are really that substantial, I definitely notice a difference in my day-to-day work. It's especially helpful with repetitive tasks (like pivoting tables in SQL) or when working with unfamiliar technologies (like building a web app in TypeScript). Overall, I'd estimate about a 20% increase in speed. But this boost isn't just about productivity; it's a paradigm shift that also expands what feels possible. I believe that as agentic tools continue to evolve, we'll see a growing efficiency gap between individuals and companies that have figured out how to leverage these technologies and those that haven't.
When it comes to analytics, I'm especially excited about automated reporting agents. Imagine an AI that can pull the right data, create visualisations, perform root cause analysis where needed, note open questions, and even create the first draft of the presentation. That would be simply magical. I've built a prototype that generates such KPI narratives. And even though there's a big gap between the prototype and a production solution that works reliably, I believe we'll get there.
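To make the idea concrete, here is a minimal sketch of the kind of KPI-narrative prompt such an agent might assemble. The metrics, numbers, and prompt wording are purely illustrative assumptions, not Mariya's actual prototype:

```python
import pandas as pd

# Toy weekly KPI snapshot (illustrative numbers, not real data)
kpis = pd.DataFrame({
    "metric": ["active_users", "conversion_rate", "avg_order_value"],
    "previous_week": [120_000, 0.031, 42.5],
    "current_week": [114_000, 0.034, 41.8],
})

# Compute week-over-week changes so the LLM reasons over precomputed facts, not raw tables
kpis["wow_change_pct"] = (
    (kpis["current_week"] - kpis["previous_week"]) / kpis["previous_week"] * 100
).round(1)

facts = "\n".join(
    f"- {row.metric}: {row.previous_week} -> {row.current_week} ({row.wow_change_pct:+}% WoW)"
    for row in kpis.itertuples()
)

prompt = (
    "You are a product analyst. Write a short KPI narrative for the weekly report.\n"
    "Highlight notable changes, suggest likely root causes to investigate, "
    "and list open questions.\n\nThis week's KPI changes:\n" + facts
)

print(prompt)  # pass this prompt to the LLM of your choice (a hosted API or a local model)
```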
You've written three articles in the "Practical Computer Simulations for Product Analysts" series. What inspired that series, and how do you think simulation can reshape product analytics?
Simulation is a hugely underutilised tool in product analytics. I wrote this series to show people how powerful and accessible simulations can be. In my day-to-day work, I keep running into what-if questions like "How many operational agents will we need if we add this KYC control?" or "What's the likely impact of launching this feature in a new market?". You can simulate any system, no matter how complex. So simulations gave me a way to answer these questions quantitatively and fairly accurately, even when hard data wasn't yet available. I'm hoping more analysts will start using this approach.
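As an illustration of that kind of what-if question, here is a minimal Monte Carlo sketch for the KYC staffing example. The case volumes, handling times, and shift length are hypothetical assumptions, not figures from Mariya's work:

```python
import numpy as np

rng = np.random.default_rng(42)
N_DAYS = 10_000  # number of simulated working days

# Hypothetical inputs: daily case volume and average per-case handling time (minutes)
daily_cases = rng.poisson(lam=450, size=N_DAYS)                      # new KYC checks per day
handle_min = rng.lognormal(mean=np.log(6), sigma=0.4, size=N_DAYS)   # avg minutes per case that day

# Total daily workload in agent-hours, assuming ~6 productive hours per agent shift
workload_hours = daily_cases * handle_min / 60
agents_needed = workload_hours / 6

# Plan capacity for a busy day rather than the average day
print(f"median agents needed: {np.percentile(agents_needed, 50):.1f}")
print(f"95th percentile (staff for busy days): {np.percentile(agents_needed, 95):.1f}")
```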
Simulations also shine when working with uncertainty and distributions. Personally, I prefer bootstrap methods to memorising a long list of statistical formulas and significance criteria. Simulating the process often feels more intuitive, and it's less error-prone in practice.
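For example, a bootstrap confidence interval can be estimated in a few lines by resampling the observed data; the data below is synthetic and purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy observed metric, e.g. order values from an experiment variant (synthetic data)
orders = rng.exponential(scale=40, size=500)

# Bootstrap: resample with replacement and recompute the statistic many times
n_boot = 10_000
boot_means = np.array([
    rng.choice(orders, size=len(orders), replace=True).mean()
    for _ in range(n_boot)
])

# The spread of the resampled means gives a confidence interval without closed-form formulas
low, high = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {orders.mean():.2f}, 95% bootstrap CI = [{low:.2f}, {high:.2f}]")
```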
Finally, I find it fascinating how technology has changed the way we do things. With today's computing power, where any laptop can run thousands of simulations in minutes or even seconds, we can easily solve problems that would have been challenging just thirty years ago. That's a game-changer for analysts.
Several of your posts focus on transitioning LLM applications from prototype to production. What common pitfalls do you see teams run into during that phase?
Through practice, I've discovered there's a big gap between LLM prototypes and production solutions that many teams underestimate. The most common pitfall is treating prototypes as if they're already production-ready.
The prototype phase can be deceptively simple. You can build something functional in an hour or two, test it on a handful of examples, and feel like you've cracked the problem. Prototypes are great tools to prove feasibility and get your team excited about the opportunities. But here's where teams often stumble: these early versions provide no guarantees around consistency, quality, or safety when facing diverse, real-world scenarios.
What I've learned is that successful production deployment starts with rigorous evaluation. Before scaling anything, you need clear definitions of what "good performance" looks like in terms of accuracy, tone of voice, speed, and any other criteria specific to your use case. Then you need to track these metrics consistently as you iterate, making sure you're actually improving rather than just changing things.
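In practice, that tracking can start as a very small evaluation harness that is re-run after every prompt or model change. The sketch below is illustrative: `classify_comment` is a stand-in for a real LLM call, and the labelled examples stand in for a curated eval set:

```python
import time

# Tiny labelled eval set; in practice this would be a curated, versioned dataset
EVAL_SET = [
    {"text": "The app keeps crashing on checkout", "label": "complaint"},
    {"text": "Love the new dashboard, great job!", "label": "praise"},
    {"text": "How do I export my data to CSV?", "label": "question"},
]

def classify_comment(text: str) -> str:
    """Placeholder for the real LLM call (e.g. a prompted classifier)."""
    if "?" in text:
        return "question"
    return "complaint" if "crash" in text.lower() else "praise"

def run_eval(classify) -> dict:
    correct, latencies = 0, []
    for example in EVAL_SET:
        start = time.perf_counter()
        prediction = classify(example["text"])
        latencies.append(time.perf_counter() - start)
        correct += prediction == example["label"]
    return {
        "accuracy": correct / len(EVAL_SET),
        "avg_latency_s": sum(latencies) / len(latencies),
    }

# Re-run after every change to confirm you're actually improving, not just changing things
print(run_eval(classify_comment))
```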
Think of it like software testing: you wouldn't ship code without proper testing, and LLM applications require the same systematic approach. This becomes especially critical in regulated environments like fintech or healthcare, where you need to demonstrate reliability not just to your internal team but to compliance stakeholders as well.
In these regulated areas, you'll need comprehensive monitoring, human-in-the-loop review processes, and audit trails that can withstand scrutiny. The infrastructure required to support all of this often takes far more development time than building the original MVP. That's something that consistently surprises teams who focus primarily on the core functionality.
Your articles often blend engineering principles with data science/analytics best practices, such as your "Top 10 engineering lessons every data analyst should know." Do you think the line between data and engineering is blurring?
The role of a data analyst or a data scientist today often requires a mix of skills from multiple disciplines.
- We write code, so we share common ground with software engineers.
- We help product teams think through strategy and make decisions, so product management skills are useful.
- We draw on statistics and data science to build rigorous and comprehensive analyses.
- And to make our narratives compelling and actually influence decisions, we need to master the art of communication and visualisation.
Personally, I was lucky to gain diverse programming experience early on, back in school and university. This background helped me tremendously in analytics: it increased my efficiency, helped me collaborate better with engineers, and taught me how to build scalable and reliable solutions.
I strongly encourage analysts to adopt software engineering best practices. Things like version control systems, testing, and code review help analytical teams develop more reliable processes and deliver higher-quality results. I don't think the line between data and engineering is disappearing entirely, but I do believe that analysts who embrace an engineering mindset will be far more effective in modern data teams.
You've explored both causal inference and cutting-edge LLM tuning techniques. Do you see these as part of a shared toolkit or separate mindsets?
That's actually a great question. I'm a strong believer that all these tools (from statistical methods to modern ML techniques) belong in a single toolkit. As Robert Heinlein famously said, "Specialisation is for insects."
I think of analysts as data wizards who help their product teams solve their problems using whatever tools fit best: whether it's building an LLM-powered classifier for NPS comments, using causal inference to make strategic decisions, or building a web app to automate workflows.
Rather than specialising in particular skills, I prefer to focus on the problem we're solving and keep the toolset as broad as possible. This mindset not only leads to better results but also fosters a continuous learning culture, which is crucial in today's fast-moving data industry.
You've covered a broad range of topics, from text embeddings and visualizations to simulation and multi-agent AI systems. What writing habit or guiding principle helps you keep your work so cohesive and approachable?
I usually write about topics that excite me at the moment, either because I've just learned something new or had an interesting discussion with colleagues. My inspiration often comes from online courses, books, or my day-to-day tasks.
When I write, I always think about my audience and how the piece could be genuinely useful both for others and for my future self. I try to explain all the concepts clearly and leave breadcrumbs for anyone who wants to dig deeper. Over time, my blog has become a personal knowledge base. I often return to old posts: sometimes just to copy a code snippet, sometimes to share a resource with a colleague who's working on something similar.
As we all know, everything in data is interconnected. Solving a real-world problem often requires a combination of tools and approaches. For example, if you're estimating the impact of launching in a new market, you might use simulation for scenario analysis, LLMs to explore customer expectations, and visualisation to present the final recommendation.
I try to reflect these connections in my writing. Technologies evolve by building on earlier breakthroughs, and understanding the foundations helps you go deeper. That's why many of my posts reference one another, letting readers follow their curiosity and discover how different pieces fit together.
Your articles are impressively structured, often walking readers from foundational concepts to advanced implementations. What's your process for outlining a complex piece before you start writing?
I believe I developed this way of presenting information at school, as these habits have deep roots. As the book The Culture Map explains, different cultures differ in how they structure communication. Some are concept-first (starting from fundamentals and iteratively moving to conclusions), while others are application-first (starting with results and diving deeper as needed). I've definitely internalised the concept-first approach.
In practice, many of my articles are inspired by online courses. While watching a course, I outline the rough structure in parallel so I don't forget any important nuances. I also note down anything that's unclear and mark it for future reading or experimentation.
After the course, I start thinking about how to apply this knowledge to a practical example. I firmly believe you don't really understand something until you try it yourself. Although most courses include practical examples, they're often too polished. It's only when you apply the same ideas to your own use case that you run into edge cases and friction points. For example, the course might use OpenAI models, but I might want to try a local model, or the default system prompt in the framework doesn't work for my particular case and needs tweaking.
Once I have a working example, I move on to writing. I prefer to separate drafting from editing. First, I focus on getting all my ideas and code down without worrying about grammar or tone. Then I shift into editing mode: refining the structure, choosing the right visuals, putting together the introduction, and highlighting the key takeaways.
Finally, I read the whole thing end-to-end from the beginning to catch anything I've missed. Then I ask my partner to review it. They often bring a fresh perspective and point out things I didn't consider, which helps make the article more comprehensive and accessible.
To learn more about Mariya's work and stay up to date with her latest articles, follow her here on TDS and on LinkedIn.