Ntropy

Software Development

Start building with financial data.

About us

Ntropy is the most accurate financial data standardization and enrichment API. Any data source, any geography.

Website
https://ntropy.com
Industry
Software Development
Company size
11-50 employees
Headquarters
New York
Type
Privately Held
Founded
2019
Specialties
artificial intelligence, data privacy, data, fintech, lending, categorization, enrichment, machine learning, and business underwriting


Updates

  • Ntropy (2,740 followers)

    Our CEO Naré Vardanyan shares why bringing down the cost of LLM inference, while guaranteeing performance and reliability, remains one of the biggest problems to solve this decade. Using the Ntropy API today to classify billions of transactions, you get SOTA model performance, reliability and cost-efficiency like nowhere else; we are talking 3-4 orders of magnitude. It is no longer about having the best model. It is about delivering the best output at any point in time, reliably and fast, at a cost that makes economic sense.

    Naré Vardanyan, Chief Executive @Ntropy ("Throwing way too many GPU-s at all my problems"):

    In the last ten to twelve months, as a company, and as founders and individuals in the LLM space, Ilia Zintchenko and I have talked a lot about the cost of inference. We have also been working on bringing that cost down at scale for highly valuable, high-throughput and complex use cases where bigger models outperform small, specialized ones. Today, looking at the recent performance of SOTA models that are better yet cheaper and faster, many will extrapolate that inference is solved and costs are trending down. Competition artificially pushes costs down even further, and many companies are still bleeding money on this, chasing batch sizes in the longer run. However, our argument stands: there is still a massive problem to solve here. Here is why:

    1. We have not yet seen major reasoning leaps in new models, and all the compute build-up means scaling this architecture is still on the map. Current LLMs are good enough for many things, yet have a long way to go. There is a very high probability that the next model that leaves GPT-4 in the dust, not by shaving off a few benchmark percentage points here and there but by seriously unlocking things that were not possible before, is going to be larger and much more expensive to run.

    2. The cheaper inference gets, the higher the demand: more things become possible that were economically prohibitive before, expanding the market even further. There is a reason to make things that are already becoming cheaper even more accessible.

    3. Finally, the most valuable LLM outputs and products are going to be context-dependent and very recursive (e.g. agentic workflows and interactions). They require many LLM calls back and forth, which means the trend of inference becoming cheaper has a smaller net effect on those bills. We need orders-of-magnitude faster and cheaper options here, without sacrificing accuracy and while ensuring reliability.

    How's your Saturday going?
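The point about recursive workflows can be made concrete with some back-of-the-envelope arithmetic. All prices and token counts below are illustrative assumptions, not any provider's actual pricing:

```python
# Back-of-the-envelope cost of a recursive, agentic workflow.
# All prices and token counts are illustrative assumptions,
# not real provider pricing.

def workflow_cost(calls, input_tokens, output_tokens,
                  price_in_per_m, price_out_per_m):
    """Dollar cost of one workflow: calls x per-call token cost."""
    per_call = (input_tokens * price_in_per_m
                + output_tokens * price_out_per_m) / 1_000_000
    return calls * per_call

# One-shot query: 1 call, 2k input / 500 output tokens
# at $5 / $15 per million tokens.
single = workflow_cost(1, 2_000, 500, 5.0, 15.0)

# Agentic workflow: 40 calls, each re-sending a growing context
# (6k input / 500 output tokens on average) at the same prices.
agentic = workflow_cost(40, 6_000, 500, 5.0, 15.0)

print(f"one-shot: ${single:.4f}")  # $0.0175
print(f"agentic:  ${agentic:.2f}")  # $1.50
```

Under these assumptions the agentic workflow costs roughly 85x the one-shot query, so even a 10x per-token price cut still leaves a bill an order of magnitude above the single call.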

  • Ntropy reposted this

    Naré Vardanyan (Chief Executive @Ntropy):

    This year there have been a lot of reports and a ton of chatter about the ROI on compute spend. So at Money20/20, together with NVIDIA and Amazon Web Services (AWS), we are going to debunk the myths and talk real use cases, run demos, show end-to-end results and, obviously, sprinkle some magic! Join us at the Pavilion! Ntropy and Ntropians will be at the booth from 9am daily! Obviously we could not make this happen if our customers didn't build awesome products on top of our APIs, Amazon Web Services (AWS) and NVIDIA accelerated compute! Cascading AI (YC S23), Scribe, Novo, Airbase 🚀🚀🚀🚀

  • Ntropy

    Building finops tools for CFOs? Here's what we have learned.

    Naré Vardanyan (Chief Executive @Ntropy):

    We sell to a lot of companies who sell to CFOs. Below are a few things we have learned 👇

    Procurement, finance, spend management and revenue ops are prime areas to build for, as they are high leverage and filled with manual processes. After all, there are two main tools used in the modern enterprise: the IDE for writing code (VS Code, Emacs, more recently Cursor, etc.) and Excel for all non-engineering work. The third contender is the inbox. Anything you do in the workplace revolves around these, hence this wave of enterprise LLMs is going to disrupt them or build processes on top of them.

    Let's zoom into Excel, though.

    - Excel is Turing complete (with VBA). This flexibility allows finance to tell stories for various audiences. Strategic finance teams are great at telling stories and love it. If you are replacing Excel and offering pre-built automations without allowing finance to break the abstractions down and tinker, you will lose. They love their formulas because they have control: they can tell amazing stories at whatever level of granularity they want. You can technically embed series of multi-step workflows in Excel too; the opportunity to replace Excel here is much bigger.

    - Running a multi-step process in Excel with multiple stakeholders, data entry and data reconciliation is a pain. Excel sucks as a database and orchestration environment, compared to being a storytelling tool. It is too prone to errors and there are no proper access controls. Collaboration is not native: there are problems with version control, and it is hard to track changes and keep audit trails.

    Given the above, if you offer workflow automations that are scalable, easy to integrate with existing systems and backed by robust data infrastructure with controls, you have the CFO's ear. The easier it is for others to use, the better. The easier it is to audit and to find and fix errors, the better.

    There is a caveat. They will buy it only if they do not have to get the team to learn how to use it. Teams that run migrations for the CFO suite end up winning, even if it is forward deployed.

    Foundation models, and abstractions built on top of them like reasoning chains and tool use/integrations, are ripe to re-imagine how finance runs processes and to disrupt Excel from this angle. What is exciting is how custom this can get and how much opportunity there is to leverage unstructured business data, particularly vertical by vertical, where certain context was until now locked in the experience and skills of domain experts only.

    Our tips if you are building for the space:
    - Replace Excel for process, not for storytelling
    - Leverage contextual data to the max (nail salon businesses have different needs than construction)
    - Focus on the cost of migration and implementation. If you have to take it on, do it.

    Happy Friday!

  • Ntropy reposted this

    Ilia Zintchenko (CTO @ Ntropy):

    Prompt caching, just launched by Anthropic, is a must-have if you are running high-throughput tasks with LLMs. Let's unpack:

    - It is effectively the same as Gemini's "context caching", launched two months ago.
    - It makes fine-tuning obsolete for all but some niche cases.
    - It unlocks up to a 10x cost reduction for high-throughput applications where the ratio of static input tokens to (dynamic input tokens + output tokens) is large.
    - We can call prompt caching (or "context caching") LEVEL 2. By only requiring *parts* of the prompt to be the same as a previous query, it goes one step further than the most basic LEVEL 1 input caching, where the *full* prompt has to be exactly the same as a previous query.
    - At Ntropy we have had an even more advanced LEVEL 3 caching setup in production since early this year, which has enabled us to scale to massive LLM volumes at an even lower cost than running smaller specialized models in-house.
    - With LEVEL 3 caching, even queries that look different at first glance, but which have fundamentally similar structure, can share the same cache bucket. This requires figuring out structural similarities between the *variable* parts of different prompts. We use an approach similar to locality-sensitive hashing, with hashing models that are specialized for each individual task.
    - LEVEL 2 and LEVEL 3 caching can be combined to achieve orders-of-magnitude cost and latency reductions over raw model queries, and they are key to making the largest LLMs viable for high-value, high-throughput tasks in the real world.

    So much more to come 🚀🚀🚀
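The cache levels can be sketched roughly as follows. LEVEL 2 (prefix caching) lives server-side in the provider's KV-cache, so this toy contrasts LEVEL 1 exact-match caching with a LEVEL 3 bucket scheme; the regex normalization is a crude stand-in for the task-specific hashing models described above, which are not public:

```python
import hashlib
import re

def level1_key(prompt: str) -> str:
    """LEVEL 1: exact-match caching. The full prompt must be identical."""
    return hashlib.sha256(prompt.encode()).hexdigest()

def level3_key(prompt: str) -> str:
    """LEVEL 3: structural caching. Prompts that differ only in the
    superficial details of their variable parts land in one bucket.
    A crude digit normalization stands in for a learned,
    task-specific hashing model."""
    normalized = re.sub(r"\d+(\.\d+)?", "<NUM>", prompt.lower())
    return hashlib.sha256(normalized.encode()).hexdigest()

def call_llm(prompt: str) -> str:
    # Stand-in for an expensive model call.
    return "merchant=Amazon Marketplace; category=shopping"

cache: dict[str, str] = {}

def classify(prompt: str) -> str:
    key = level3_key(prompt)
    if key not in cache:          # miss: pay for one model call
        cache[key] = call_llm(prompt)
    return cache[key]             # hit: free

# Two transactions differing only in amount and reference number
# share one bucket, so the second call is a cache hit:
classify("AMZN MKTP US 12.99 ref 480912377112")
classify("AMZN MKTP US 7.49 ref 993100424558")
print(len(cache))  # 1 bucket, so only one model call was paid for
```

Under LEVEL 1, those two prompts would produce two distinct keys and two model calls; the structural key collapses them into one bucket, which is where the claimed cost reduction comes from.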

  • Ntropy reposted this

    Naré Vardanyan (Chief Executive @Ntropy):

    There are many reasons why code generation, and creating a software engineering agent, is the holy grail for so many companies. Software engineering is a classic example of a high-leverage activity, but it is very expensive. Most code is buggy, gets old, needs to be updated and maintained, and is hard to audit. The consequences of bad code are far-reaching and can even result in serious disruptions.

    So we have three components that matter:
    - Very high utility, i.e. it directly leads to a lot of value being created.
    - Expensive, with growing demand and low supply: there are not enough humans to write all the code that needs to be written.
    - Needs to be constantly maintained and updated. Often the cost of maintenance is higher than the design of the solution itself.

    Because of the above, software as a service took off in the last decades. Instead of hiring humans to write expensive code, then audit and debug it for every single activity you want to perform via software, you could suddenly pay for the code and its constant deliverability in a certain narrow area. This is how we ended up with so many vendors and SaaS indices.

    Now, if the cost of writing code goes down and instead of 30M developers we have hundreds of millions of software development agents, imagine the amount of process that can be automated and the efficiency that can be introduced. The second-order implications of this are wild, but I am not writing about that today.

    If you are a founder trying to find the right "skills" or "tasks" to disrupt with LLMs, the framework behind software engineering is very viable:
    - Needs to have enough utility, yet a lot of human scarcity behind it.
    - Needs to be expensive to perform with humans.
    - The demand needs to be growing rather than shrinking, which means cost drops will activate further demand, following Jevons paradox.
    - Maintenance and updates take an equal amount of time and cost, making the demand continuous.

    If your task has these properties and the person you are augmenting or automating for has a gun to their head to get things done, you are golden. Software engineers, recruiters, GTM roles, accountants and finally lawyers are obvious places to go.

    Lawyers are interesting. At first glance it seems they can bill fewer hours, hence their business model will be challenged. On the other hand, the number of entities that can suddenly afford first-class legal advice will grow drastically, multiplying the billable hours even if the business model never changes.

    Accounting and financial workflow automations are a no-brainer here. Finance:
    - Has high utility and is very expensive. You cannot do anything if you do not do the finances.
    - Has a shrinking supply of financial professionals and rising demand.
    - Bringing the costs down means demand will rise further, allowing anyone to have access to real-time intelligence.
    - Is repetitive, hard to maintain and hard to audit.

    If you are building financial automations, talk to us at Ntropy.

  • Ntropy

    Understanding unstructured data for SMEs has been impossible at scale, making them highly unprofitable to lend to and invest in. This is changing thanks to better data + LLMs. Check out what our partners at Validis have to say about it.

    Validis (8,477 followers):

    We’ve partnered with Ntropy, specialists in the standardization & enrichment of transaction data 💰✅

    Large language models (LLMs) are redefining efficiency across the accounting and finance sectors.

    Picture this 💭: a trained AI assistant that processes data and manages the heavy lifting, so you don’t have to. Sounds ideal, doesn’t it?

    But there's a problem 💡: LLMs are only as good as the data they are fed. Even the smallest data inaccuracies can produce disastrous results.

    By using Validis, Ntropy can tap into fully standardized accounting data, helping their users understand the core truth of their business customers.

    Together we're joining up the dots to help professionals in the worlds of accounting, audit, tax and lending see the full picture and serve their clients intelligently, at pace and at scale 🚀

    Thank you to the team behind this partnership: Naré Vardanyan, Michael Turner, Ronan O'Dea & Mo Awadalla, CFP® 🤝

    Check out the full post from Ntropy, linked in the comments ⬇️

    #WeGetFinancialData #API #FinTech #AccountingTech #Ecosystem #Partnership #DataDrivenAudit #DataDrivenLending

  • Ntropy

    How are you evaluating vendors, how long does it take you, and what matters? As the sales stack changes, so do procurement and how we manage the many vendors. Guess what you need to streamline a vendor management solution and get on top of it? Yes, the Ntropy API, and reliable, clean and accurate transaction information. Our CEO shares her thoughts below on AI-native sales vendors and how the space is shaping up!

    Naré Vardanyan (Chief Executive @Ntropy):

    Mo Awadalla, CFP® and I are on a mission to introduce more automation to our sales motion. As a super lean team at Ntropy selling to large enterprises, we did a deep dive into tooling. I will update the note as we make final decisions and get to use some tools in the longer run. For now, some uncensored views on where B2B sales are going. I have shared notes on some new providers and categorized them. What really excites me intellectually is thinking about how pricing is evolving and how the process of buying has to change to balance out how selling is changing. Would love to hear your thoughts and what your AI-native sales stack looks like, if you have one already.

    Deep dive into LLM powered sales tools: reviews, thoughts on pricing + second order effects

    Naré Vardanyan on LinkedIn

  • Ntropy reposted this

    Naré Vardanyan (Chief Executive @Ntropy):

    Recently there have been many thought pieces on LLMs, what AI can and cannot do, and where it is useful. If you are building with AI, breaking through the noise is key. Join Ilia Zintchenko at Fintech devcon today, where he shares notes on deploying LLMs at scale in financial services and touches on bottlenecks such as cost, speed, reliability and compliance, and how we are working to overcome them. One of the fascinating things about this technology is that its parity with human ability in certain areas pushes many to apply it to things it is not necessarily fit for yet. There is a lot of economic value lying right in front of us where it is a perfect fit and is going to change everything. Even if these models never got better from this spot, that value is up for grabs. But they are getting better. The cycles will take longer with the data centre buildout, but the reality is that we have not seen anything make a bigger difference to the quality of the models than scale. So we shall scale. In the meantime, let us push some intelligence to prod!

  • Ntropy reposted this

    Naré Vardanyan (Chief Executive @Ntropy):

    Years ago, the first Macintosh had the ambition of talking to us, and we found it cute. Hello world! Now we have machines that can actually talk back and carry out commands in English. And guess what is one of the first areas that is going to be disrupted by this? Accounting, tax and audit.

    This is a very unique opportunity where we are seeing a ton of adoption of LLMs. One of the main reasons is the declining talent pool of human professionals and the increasing demand for these services. So many companies are re-thinking the whole space. We have even had inbound outreach from government bodies trying to address tax credit fraud with automation, because they do not have enough humans to audit.

    Accounting data is a core source of truth for understanding businesses. LLMs have unlocked what you can do with this data, from CPA assistants and co-pilots to automated tax filing and loan underwriting, and finally delivering real-time intelligence for your finance team. But first, you need to get the data and make sure it is of high quality. Data normalization and ingestion from bank ledgers into accounting platforms is still very manual, context-dependent and demands high precision. We are solving this. Once you have the better data, your further output with LLMs can be more reliable and grounded.

    Today we are partnering with Validis, one of the top providers of accounting data, to bring bank and accounting information into the LLM age. Companies like Barclays, Santander, Grant Thornton (US), Withum and Deloitte already rely on Validis for access to accounting feeds. The announcement blog with more details is in the comments. If you are building in the space, do reach out! Super excited for this chapter and for getting to work with Ronan O'Dea, Michael Turner and the rest of the Validis team!
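To make "normalization and enrichment" concrete: the job is turning a raw ledger string into structured, grounded fields that downstream LLMs can rely on. A minimal sketch of the input and output shapes (the field names are illustrative, not Ntropy's or Validis's actual schema):

```python
from dataclasses import dataclass, field

# A raw transaction as it typically arrives from a bank ledger:
# a cryptic description string plus an amount and a date.
raw = {
    "description": "AMZN MKTP US*2V4XY8 SEATTLE WA",
    "amount": -23.49,
    "currency": "USD",
    "date": "2024-05-14",
}

@dataclass
class EnrichedTransaction:
    """The kind of structured record an enrichment pipeline produces.
    Field names are illustrative, not an actual API schema."""
    merchant: str
    category: str
    location: dict = field(default_factory=dict)
    is_recurring: bool = False

# What the raw row above should resolve to:
enriched = EnrichedTransaction(
    merchant="Amazon Marketplace",
    category="shopping",
    location={"city": "Seattle", "region": "WA", "country": "US"},
)

print(enriched.merchant, "/", enriched.category)
```

The hard part, of course, is producing the right-hand side accurately at scale, across data sources and geographies, which is exactly where the high-precision requirement mentioned above comes from.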


Funding

Ntropy: 4 total rounds
Last round: Series A, US$11.0M

See more info on Crunchbase