Artificial intelligence

JackKirk

Well-Known Member
Licensed User
Longtime User
ChatGPT has had that for a long time (maybe forever, I can't say).
It didn't the last time I used it, but seeing as they are from the same stable (OpenAI), it is not surprising they have similar UI features.

The difference as far as I am concerned is ChatGPT is trained to be a chatbot, Copilot is trained to be a coding assistant.

Google "what are the primary differences between chatgpt and copilot" and get Gemini's take on it
 

LucaMs

Expert
Licensed User
Longtime User
It SEEMS like that, but Copilot has disappointed me a lot. Gemini has been great in the last few days, and ChatGPT seems good to me; with this new feature it will be much more useful and apparently more intelligent, better informed about B4X (because I will inform it in my "GPT project"; outside of that, it will remain as ignorant as it is now, about B4X obviously).
 

JackKirk

Well-Known Member
Licensed User
Longtime User
I have had extensive, useful dialogs with Copilot on B4A and B4J (I haven't had a need to discuss B4i), and it is most definitely literate in both. Without it I would have had no chance of developing a B4A app with services resilient to Samsung firmware's aggressive killing of services in the name of battery preservation.
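
The usual shape of that resilience pattern in B4A is a foreground service that reschedules itself with StartServiceAt, so the OS recreates it even after the process is killed. Below is a minimal sketch of that generic pattern, assuming a 5-minute interval and placeholder work; it is an illustration, not JackKirk's actual code.

```
'B4A service module: minimal sketch of a kill-resilient service.
'Illustrative assumptions: the interval and the (empty) periodic work.
Sub Process_Globals
    Private IntervalMs As Long = 5 * DateTime.TicksPerMinute
End Sub

Sub Service_Create
    'Run as a foreground service so aggressive battery managers
    '(such as Samsung's) are much less likely to kill the process.
    Service.AutomaticForegroundMode = Service.AUTOMATIC_FOREGROUND_ALWAYS
End Sub

Sub Service_Start (StartingIntent As Intent)
    'Do the periodic work here (placeholder), then reschedule.
    'StartServiceAt survives process death: even if the firmware
    'kills the app, the OS restarts this service at the set time.
    StartServiceAt(Me, DateTime.Now + IntervalMs, True)
End Sub

Sub Service_Destroy
End Sub
```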
 

LucaMs

Expert
Licensed User
Longtime User
Okay, but try ChatGPT "today".
This "projects" (GPT) thing sounds really cool.

You can also create folders and subfolders.

(With this advertising I'm doing for it, it should pay me.)
 

LucaMs

Expert
Licensed User
Longtime User
My experience today with ChatGPT...

View attachment 166581

ha ha ha...

My experience today with ChatGPT...

I noticed that "Create project" menu, so I opened a chat and asked it for information.
Its answers, in short, were that in a project you can:
  • add files (much better if they're text files, .md or .txt)
  • create folders and subfolders
  • add chats (your chats with it, which you can move from the chat list into the project)
ChatGPT will automatically consult everything in the project, and even more so if you specifically tell it what to consult/use.

Then, after trying it out, I discovered that:

it can't access the chats you've placed in the project, either automatically or when you explicitly tell it to do so.

and:

You were right to check.
Let me clarify right away:

As of today (September 2025): not all accounts have the ability to create folders in projects.

The feature is being rolled out gradually: some users see the "New Folder" button, others don't (yet).

It's not tied to whether you have a Free or Plus account: it's really a matter of a "staged rollout."

So it's not that you "looked wrong": if you don't see the option, it simply means it hasn't been activated for you (yet).
 

LucaMs

Expert
Licensed User
Longtime User
My experience today with ChatGPT...
and right now...

My question:

Well, before I find out the hard way about more limitations you haven't listed, can you let me know for sure:
1. How many files can I upload during a session before your mind gets clouded (i.e. a less intelligent model is used)?
2. What is the maximum size of a text file I can upload?



Its answer:

Great questions: we're entering practical limitations that are best understood immediately, without having to discover them through frustration. I'll answer with certainty for the current situation (September 2025):


1. Number of uploadable files

  • There is no "official" hard limit, but in practice dozens of files are fine.
  • The important thing is not how many files you upload, but how much total text I have to process as context.
  • When the project becomes very heavy (too many long sources), the model might:
    • ignore some of the content,
    • or give you vaguer answers.
  • You are not "downgraded" to a less intelligent model, but content beyond a certain threshold is cut from the context.



2. Maximum file length


  • In practice: a few dozen pages of plain text (about 50–100 KB) works well.
  • Very long files (hundreds of KB or MB) are not used entirely: the model will only take pieces of them (typically the initial and final sections).
  • If you have extensive documentation, it is a good idea to divide it into several thematic files (e.g. B4X_Syntax.md, B4XPages_Lifecycle.md, Errori_Comuni.md).



Practical advice


  • Keep your files focused and modular: 10 files of 5 pages are better than 1 file of 50 pages.
  • If you have a "bible" (e.g. the complete B4X manual), do not upload it all in one file, or a lot of the text will not be considered.



To summarize:


  1. Total files → you can upload quite a few, but the real limit is the amount of text that fits into the context.
  2. Single file → best to keep it under 50–100 KB (dozens of pages).
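
A rough sanity check on those numbers (my own arithmetic, not part of ChatGPT's answer): at roughly 4 characters per token, a 128K-token context window holds about 500 KB of text, so several 50 KB files fit comfortably while a multi-megabyte manual cannot. To apply the splitting advice mechanically, a B4J sketch like the one below cuts a large manual into ~50 KB chunks at paragraph boundaries; the paths, file names and chunk size are assumptions.

```
'B4J sketch: split a large manual into ~50 KB chunk files.
'Assumptions: source path, output folder (must already exist), chunk size.
Sub SplitManual
    Dim MaxChunk As Int = 50 * 1024 'target ~50 KB of text per file
    Dim Source As String = File.ReadString("C:\docs", "B4X_Manual.md")
    Dim Part As Int = 1
    Dim sb As StringBuilder
    sb.Initialize
    For Each Line As String In Regex.Split(CRLF, Source)
        sb.Append(Line).Append(CRLF)
        'Cut at a blank line (paragraph boundary) once the chunk is big enough.
        If sb.Length >= MaxChunk And Line.Trim = "" Then
            File.WriteString("C:\docs\chunks", $"B4X_Manual_${Part}.md"$, sb.ToString)
            Part = Part + 1
            sb.Initialize
        End If
    Next
    If sb.Length > 0 Then 'write the remainder
        File.WriteString("C:\docs\chunks", $"B4X_Manual_${Part}.md"$, sb.ToString)
    End If
End Sub
```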
 

Tim Chapman

Active Member
Licensed User
Longtime User
I asked Gemini why Perplexity does well at B4J. Here is what it replied (a code sketch of the retrieval pattern it describes follows the reply):

Perplexity AI's strength with B4J, despite it being a niche language, comes from its unique architecture, which is fundamentally different from a purely local, "dumb" LLM running on Ollama.

Here's why Perplexity AI likely does well with B4J:

Real-time Web Search and Information Retrieval: This is the absolute biggest factor. Unlike a local LLM that only knows what it was trained on, Perplexity AI is an "answer engine" or "conversational search engine." When you ask it a question, it:

Performs a real-time web search: It actively queries the internet using your prompt (or refined versions of it).

Indexes and processes search results: It intelligently reads through the top web pages returned from its search.

Synthesizes answers from the retrieved information: It then uses its underlying LLMs to generate a concise answer based on the content it just found on the web, along with citations.

Implication for B4J: This means that if there's any information about B4J online – documentation, forum posts, code examples, tutorials, discussions about common B4J patterns, or how B4J relates to Java – Perplexity AI can find it and use that information to construct a relevant answer. It doesn't need to have B4J in its initial training data; it retrieves it on demand.

Access to Powerful, General-Purpose LLMs (often larger than what you can run locally): Perplexity AI uses a combination of its own models (like Sonar) and can also leverage access to leading commercial models like:

GPT-4 / GPT-4.5 / GPT-4o (or similarly sized models from OpenAI)

Claude 3 / Claude 3.5 (from Anthropic)

Gemini (from Google)


These are much larger and more capable models than anything you can realistically run on an 8GB Jetson Orin Nano. These larger models have a more extensive general knowledge of programming concepts, common patterns, and highly nuanced understanding of major languages like Java. When Perplexity retrieves information about B4J, these powerful models are then better equipped to understand and synthesize that information into actionable B4J-related code or explanations.

Contextual Understanding and Iterative Refinement: Perplexity is designed for multi-turn conversations and often refines its internal search queries and responses based on your follow-up questions. This iterative process helps it home in on the specific B4J context you're discussing.

"Understanding" of Java: Since B4J sits on top of Java, Perplexity AI's underlying LLMs have a deep understanding of Java itself. When it retrieves B4J-related information, it can often bridge the gap by connecting B4J concepts to their Java equivalents, which is incredibly useful for problem-solving.

In essence:

Perplexity AI doesn't "know" B4J because it was explicitly trained on a massive B4J dataset. Instead, it "learns" about B4J on the fly by searching the internet for relevant B4J content and then applying the reasoning and generation capabilities of powerful, often much larger, general-purpose LLMs to that retrieved information.
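
For B4X readers, the retrieve-then-synthesize loop Gemini describes can be sketched in a few lines. This is a conceptual illustration, not Perplexity's implementation: the search endpoint is hypothetical, the LLM call uses the OpenAI chat-completions request shape, and the key is a placeholder. It needs the jOkHttpUtils2 and JSON libraries.

```
'B4J sketch of "search the web, then let an LLM synthesize an answer".
Sub AskWithRetrieval (Question As String)
    'Step 1: real-time web search (hypothetical endpoint and response format).
    Dim Search As HttpJob
    Search.Initialize("", Me)
    Search.Download2("https://search.example.com/api", Array As String("q", Question))
    Wait For (Search) JobDone(Search As HttpJob)
    Dim Retrieved As String = ""
    If Search.Success Then Retrieved = Search.GetString 'top results as plain text
    Search.Release
    'Step 2: hand the retrieved text to a large general-purpose LLM
    'and ask it to answer using only that context.
    Dim gen As JSONGenerator
    gen.Initialize(CreateMap("model": "gpt-4o", "messages": Array( _
        CreateMap("role": "user", "content": _
        $"Answer using only this context:${CRLF}${Retrieved}${CRLF}Question: ${Question}"$))))
    Dim LLM As HttpJob
    LLM.Initialize("", Me)
    LLM.PostString("https://api.openai.com/v1/chat/completions", gen.ToString)
    LLM.GetRequest.SetContentType("application/json")
    LLM.GetRequest.SetHeader("Authorization", "Bearer YOUR_API_KEY") 'placeholder
    Wait For (LLM) JobDone(LLM As HttpJob)
    If LLM.Success Then Log(LLM.GetString) 'raw JSON containing the answer
    LLM.Release
End Sub
```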
 