Can we trust Open Source AI over proprietary AI models?

As most of us already know, the Linux world has grown to where it is today on a bedrock of openness, collaboration, and knowledge sharing. The advent of AI, though, has come with algorithms and code that are treated as top secret.

Open source AI models, on the other hand, allow anyone to subject them to scrutiny, because their code is published in full public view.

Does open source AI hold the real key to building AI tools the Linux community can rely on? Do we have sufficient technical knowledge to scrutinize and correct open source AI models?


Absolutely, open source is at the heart of Linux, fostering innovation through community collaboration. Open-source AI models embrace this spirit, inviting scrutiny and improvement. It’s a powerful way to build trustworthy AI tools for Linux.

The community’s technical depth is vast, and while AI is complex, there’s enough collective expertise to scrutinize and enhance these models. It’s about leveraging the community’s strength to make AI more transparent and reliable.

This is also why many of us here - if not most - trust open-source Linux over, say, Windows or macOS.

Also see the article:


I believe that as the Linux community adapts to the AI revolution, it should not dilute its open source nature by forging proprietary models. Staying open would help us eliminate, to a large extent, the algorithmic bias that creeps into AI model development; most proprietary AI models display that bias. Open source AI models all the way.

We can trust open source AI models because they are well developed by AI programmers. But they carry a margin of error: they lack human reasoning, and tricky questions can draw wrong replies from them. Likewise, you can make mistakes when modifying a model's algorithm yourself, so do it with caution.


@hydn @saoussen5765 I don't use AI for programming. AI helps a lot, but if overused it will also dull programmers' skills. So I stick to the normal approach: if I get stuck, I try some alterations, read a book or a reference, and then get back to coding. This in turn deepens my knowledge and makes me precise when answering questions.

Just as no programming book teaches you how to code, AI has no definitive answers either; it is simply a compilation of existing code from many places.

I am a self-taught programmer.

All the best,
Gaurav


I still believe we have a formidable force of collective knowledge within our Linux community to craft AI tools that meet our specific needs while building on the culture of trust and transparency we already have. I don't in any way like traditional AI models with their secrecy and autonomy.

Personally, I use Ollama with open source language models that run locally and perform better.
Btw, this title is a bit odd: you should always trust open source software over a proprietary alternative.
Good luck finding the one that suits you best.
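As a concrete sketch of what "running locally" looks like: the snippet below talks to Ollama's documented REST API on its default port (11434). This is a minimal sketch, not a full client; the model name `llama3` is just an example, and the final (commented) call assumes `ollama serve` is already running with that model pulled.

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot text generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Build a non-streaming request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The non-streaming response carries the full answer in "response"
        return json.loads(resp.read())["response"]

# Example (requires `ollama serve` and a pulled model, e.g. `ollama pull llama3`):
# print(ask("llama3", "Explain what a symlink is in one sentence."))
```

Nothing leaves your machine: the prompt and the model weights both stay local, which is the whole point of this setup.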


Ollama is available for Linux and macOS, with a Windows release coming soon. I am waiting for the Windows release to try it locally. In the meantime, could you illustrate what kinds of projects you use Ollama for? Did you use it for your GitHub or GitLab projects, or for mobile development? That would give a clearer idea of it.


Agreed, same here; I've been running models locally a lot. It will be interesting, if/when Apple launches AI, whether it will run locally on their chips or require the network like Gemini, ChatGPT, etc.


Actually, I have been reading that AI code completion can actually make things harder and slower: it produces the code, but then you have to spend a lot of time debugging it and rethinking the approach. In my case, I don't use it that way; my workflow is different, and I never ask it to generate code for me. But for learning something new and asking for explanations of code, it is enough, as suggested here: https://www.youtube.com/watch?v=tjFnifSEEjQ

I started using this editor recently with no GPT enabled. Its good points: it is sleek, faster than VS Code at startup, and has better support for plain text files than for Jupyter notebooks. Those are its best features.

VS Code becomes sluggish with too many plugins and sometimes won't start, so I shifted to VSCodium, and now on Fedora I use only this. Totally new setup in the last few days :smile: from Ubuntu to Fedora, and from VS Code and VSCodium to the Cursor editor, which is much more useful.

Sorry for the typos; just corrected them.

Thank you Gaurav


Trusting AI is like throwing a pebble into water and asking whether it will come back. It's the same as when someone says, "Look, AI suggests likes and dislikes"; I would reply, "Then go to the AI" :smile: because that has nothing to do with me. The gullible trust everything without evaluating it; sensible people don't. Think about it: how can you trust this? It can be fake.

Take an example: you ask AI for code, run it without reviewing it, and it produces wrong results. You will say it is fake. Likewise, if you trust something fake without checking or asking, you end up generating fake results and believing them, which wastes your time and others'. I don't support that. With fake, you generate more fake.

My reply: sorry, I am not interested in that, because I have the liberty to ask a person directly and see them, and I am more interested in that than in AI suggestions and believing something fake.

Gaurav
