Share My Creation: AI-AGENT Seller

Proud to present my new service/product: AI-AGENT Seller.

AI-Agent Seller is a great option for small to medium businesses that don't have the budget to hire a full-time sales team.

Some videos...




Meet your new AI employee by Magma Multimedia Productions. Designed for SMEs, it offers an affordable alternative to traditional hiring. Features include local or cloud installation, full API access, and custom developer tools for seamless integration.

I am not selling the code, but I am open to partnerships... All created with B4J, with some Python too (for local embedding, via PyBridge), and SQLite databases with hybrid vector + FTS search!

Using the Gemini Live API for conversation; the agent tools for my AI agent were created by me.
 

aeric

Expert
Licensed User
Longtime User
Amazing!
Do you have an MCP server to connect to the inventory?
 

Magma

Expert
Licensed User
Longtime User
Amazing!
Do you have an MCP server to connect to the inventory?
hi!!
No, I am using a "polytropic" LLM approach...
First converting speech to text (over the internet),
then extracting the meaning (which can also be done with a local LLM),
then searching my DB with vectors and, at the same time, FTS (locally or on a VPS),
then sending the text to text-to-speech (over the internet).

But all of this happens at the same time (recording and speaking)...

And when the meaning of the spoken input relates to my agent tools, they are triggered with a Select Case...
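The tool-dispatch step above can be sketched roughly like this. All names here (intents, handlers) are hypothetical placeholders for illustration, not Magma's actual B4J code; a B4X Select Case maps naturally onto a dict of handlers in Python:

```python
# Hypothetical sketch of the "Select Case" tool dispatch described above.
# Once the meaning (intent) of the spoken input is known, the matching
# agent tool runs; otherwise the agent just answers conversationally.

def dispatch(intent: str, payload: dict) -> str:
    # Equivalent of a B4X "Select Case intent" block
    tools = {
        "check_stock": lambda p: f"stock for {p['product']}: {p.get('qty', 0)}",
        "place_order": lambda p: f"order placed for {p['product']}",
    }
    handler = tools.get(intent)
    if handler is None:
        return "no tool matched; answer conversationally"
    return handler(payload)

print(dispatch("check_stock", {"product": "milkshake", "qty": 7}))
# → stock for milkshake: 7
```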
 

aeric

Expert
Licensed User
Longtime User
then searching my db with vectors and the same time FTS (local or in vps)
I mean, how does the agent access the tools? What are the tools, if they are not provided by an MCP server? Is it direct database access?
 

aeric

Expert
Licensed User
Longtime User
then searching my db with vectors
I am a bit behind in AI exploration.

I guess you are referring to a vector database.
You mean the agent will access this vector database?
I lack an understanding of what it is, but I think it is a different type of database designed for AI.
Does this mean we need to keep this vector database up to date, generating it from another actual production database,
or
use the vector database directly as the production database that stores all the inventory data?

My understanding is that the vector database needs to be generated with training data.
Let's say the stock quantity is updated in real time when an item is sold.
This needs to be done frequently, which means burning a lot of tokens?
 

Magma

Expert
Licensed User
Longtime User
I am a bit behind in AI exploration.

I guess you are referring to a vector database.
You mean the agent will access this vector database?
I lack an understanding of what it is, but I think it is a different type of database designed for AI.
Does this mean we need to keep this vector database up to date, generating it from another actual production database,
or
use the vector database directly as the production database that stores all the inventory data?

My understanding is that the vector database needs to be generated with training data.
Let's say the stock quantity is updated in real time when an item is sold.
This needs to be done frequently, which means burning a lot of tokens?
The LLMs are already trained...
The vectors are the product name / description / big_description (technical information) / product URL => all of these are embedded into a vector at insert time (that costs something if you use, say, Gemini - not much, around $0.000001~2 - but you can also use a local LLM for this, which is fast even on CPU/GPU) <- so this is the "training" that creates the math needed for vector searching...

So when I search the DB, I search it as a BLOB (vector/math/bytes) and at the same time with FTS for accuracy... The BLOBs are very good with meanings.
Example: "I need to drink something cold..."
A simple keyword search in my database for "cold" will return nothing,
but the LLM knows that Coca-Cola goes in the fridge, and so does a milkshake... so if you turn the query into vectors (there is a cost at search time too, around $0.0000002, because the meaning has to be converted), it will return:
Coca-Cola and milkshake, plus a ton of info from big_description with LLM additions (it talks too much),
but it will not return hot tea... or hot coffee...


5 minutes of talking/converting/using the LLM costs no more than €0.08 ---> this cost can drop by €0.01-0.07 if you use a local LLM for the embeddings (vector conversions).
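The scheme described above (embeddings stored as BLOBs in SQLite, searched alongside FTS) can be sketched in Python with toy 3-dimensional vectors standing in for real embeddings. The table and column names are illustrative assumptions, not the author's actual schema:

```python
import math
import sqlite3
import struct

# Toy sketch of hybrid vector + FTS search in SQLite, as described above.
# Real embeddings come from an LLM; here we use hand-made 3-d vectors
# where axis 0 roughly means "cold drink" and axis 1 "hot drink".

def to_blob(vec):
    # Pack a float list into bytes for storage in a BLOB column
    return struct.pack(f"{len(vec)}f", *vec)

def from_blob(blob):
    return list(struct.unpack(f"{len(blob) // 4}f", blob))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, emb BLOB)")
con.execute("CREATE VIRTUAL TABLE products_fts USING fts5(name)")

items = [("coca cola", [0.9, 0.1, 0.0]),
         ("milkshake", [0.8, 0.2, 0.1]),
         ("hot tea",   [0.1, 0.9, 0.0])]
for name, vec in items:
    con.execute("INSERT INTO products (name, emb) VALUES (?, ?)", (name, to_blob(vec)))
    con.execute("INSERT INTO products_fts (name) VALUES (?)", (name,))

def hybrid_search(query_vec, query_text, top=2):
    # Vector side: rank every row by cosine similarity to the query vector
    scored = [(cosine(query_vec, from_blob(b)), n)
              for n, b in con.execute("SELECT name, emb FROM products")]
    vec_hits = [n for s, n in sorted(scored, reverse=True)[:top]]
    # FTS side: exact keyword matches, for accuracy
    fts_hits = [r[0] for r in con.execute(
        "SELECT name FROM products_fts WHERE products_fts MATCH ?", (query_text,))]
    # Union, vector results first
    return vec_hits + [n for n in fts_hits if n not in vec_hits]

# "something cold" as a toy query vector near the cold-drink axis:
print(hybrid_search([1.0, 0.0, 0.0], "milkshake"))
# → ['coca cola', 'milkshake']  (hot tea is excluded by meaning)
```

This mirrors the "drink something cold" example: the vector side finds the cold drinks by meaning even though the word "cold" appears nowhere in the rows, while the FTS side keeps exact keyword hits accurate.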
 

aeric

Expert
Licensed User
Longtime User
My question is about the sync between the inventory DB and the vector DB, if any.
Is it 1 or 2 DBs?
 

Magma

Expert
Licensed User
Longtime User

aeric

Expert
Licensed User
Longtime User
Please share references or point me to websites with information about these concepts, such as embedding and hybrid search.
Thanks.
 

Magma

Expert
Licensed User
Longtime User
Please share references or point me to websites with information about these concepts, such as embedding and hybrid search.
Thanks.
I call it hybrid search simply because it searches both vectors and FTS... (like with cars: petrol & gas = hybrid).

For text embedding, you can ask ChatGPT...
 

Magma

Expert
Licensed User
Longtime User
I will share some parts of the code... soon...

I will start by sharing the local "Embedding" - which is also a very good example of using PyBridge.
 