As most of us already know, the Linux world has grown to where it is because it has openness, collaboration, and knowledge sharing as its foundation. The advent of AI, though, has come with algorithms and code that are treated as top secret.
Open Source AI models, on the other hand, allow anyone to subject the models to scrutiny because the code is published in full public view.
Does Open Source AI hold the real key to building AI tools for the Linux community that we can rely on? Do we have sufficient technical knowledge to scrutinize and correct open source AI models?
Absolutely, open source is at the heart of Linux, fostering innovation through community collaboration. Open-source AI models embrace this spirit, inviting scrutiny and improvement. It’s a powerful way to build trustworthy AI tools for Linux.
The community’s technical depth is vast, and while AI is complex, there’s enough collective expertise to scrutinize and enhance these models. It’s about leveraging the community’s strength to make AI more transparent and reliable.
This is also why many of us here - if not most - trust open-source Linux over, say, Windows or macOS.
I believe that the Linux community, in adapting to the AI revolution, should not dilute its open source nature by forging proprietary models. Staying open would help us eliminate, to a large extent, the algorithmic bias that creeps into AI models; many proprietary AI models are already displaying that bias. Open Source AI models all the way.
We can trust Open Source AI models because they are well developed by AI programmers. But they have a margin of error: they lack human reasoning, and tricky questions can get wrong replies from them. Also, you may make a wrong change when modifying their algorithms, so you need to do it with caution.
@hydn @saoussen5765 I don't use AI for programming. AI helps a lot, but if overused it will also dull programmers' skills. So I stick to the normal approach: if I get stuck, I try some alterations, read a book or a reference, and then get back to coding. This in turn enhances my knowledge and makes me more precise when answering questions.
Just as no programming book teaches you how to code, AI has no definitive answers either; it is simply a compilation of previous code from several places.
I am a self-taught programmer.
I still believe we have a formidable force of collective knowledge within our Linux community to craft AI tools that meet our specific needs as a community, while building on the culture of trust and transparency we already have. I don't in any way like traditional AI models with their secrecy and autonomy.
Personally I use Ollama with open source language models that run locally, and they perform better.
Btw, this title is weird; you should always trust open source software over a proprietary one.
Good luck finding the one that suits you more.
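For anyone curious what running a model locally actually looks like, here is a minimal sketch. It assumes Ollama is already installed and uses `llama2` purely as an example model name; substitute whatever model you pull from the Ollama library.

```shell
# Download an open-source model to your machine (example name; pick any
# model from the Ollama library)
ollama pull llama2

# Chat with it interactively in the terminal
ollama run llama2

# Ollama also serves a local HTTP API on port 11434, so other programs
# on your machine can query the model without any cloud service
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Explain what a symlink is in one sentence.",
  "stream": false
}'
```

Everything stays on your own hardware, which is the whole appeal: no prompts or data leave the machine.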
Ollama is available for Linux and macOS, with a Windows release coming soon. I am waiting for them to create a Windows release so I can try it locally. In the meantime, could you illustrate what kinds of projects you are using Ollama for? For example, did you use it for your GitHub or GitLab projects, or for mobile development? That would give a clearer idea of it.
Agreed, same here, I’ve been running models locally a lot. It will be interesting, if/when Apple launches AI, whether it will run locally on their chips or require the network like Gemini, ChatGPT, etc.