**Please note that donations are not needed at this time; ignore the funding goal. We care more about your feedback!**
Leading research that aims to bring quantum theory into the AI field, unlocking new horizons. In doing so, we aim to eliminate the intractability that stands in the way of major breakthroughs and boost creativity, redeeming innovation at scale. Our motivations are ambitious: we are trying to push the boundaries beyond the early-adoption phase and democratize AI, making it easily accessible to everyone around the globe. We are looking to involve the community as well, so everyone takes part in this revolutionary work.
In case you're curious, our vision entails the following:
- We're fully aware that transformers are near retirement and can no longer carry the torch forward. Hence, we look to build a new architecture based on a combination of existing novel approaches.
- We look to simulate and adapt quantum concepts to our circumstances, altering our technological present by boosting computation at scale through cost-effective solutions.
- Ultimately, if everything goes well, we intend to apply those findings to the build of an advanced digital entity: a secure multi-modal agent whose ultimate mission is to serve people while keeping them engaged in their tasks. Indeed, that involves a lot of "ethics and regulations" metrics that I think are essential to foster a safe and productive machine-human collaboration in the long term.
- Last but not least, I was spurred by the deceptive hype flooding the industry, and hence decided to embrace this journey. We are willing to cut through the unnecessary clout and meaningfully engage AI in people's lives, with the sole mission of helping humanity thrive and prosper in an age of unprecedented technological advancement.
The project's goals are twofold:
1- Simulating quantum theory, particularly superposition and entanglement, and compressing the data into "state vectors".
2- As a result, we will look to build a multi-disciplinary digital entity with a superficial cognitive system at scale.
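As a toy illustration of what goal 1's "state vectors" refer to, here is a minimal sketch, using only NumPy and standard textbook material (not our model), that puts two qubits into a superposed, entangled Bell state and reads out its measurement probabilities:

```python
import numpy as np

# Single-qubit computational basis states |0> and |1>
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Superposition: a Hadamard gate turns |0> into (|0> + |1>) / sqrt(2)
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
plus = H @ ket0

# Entanglement: the Bell state (|00> + |11>) / sqrt(2), built directly
# as a 4-dimensional state vector via the tensor (Kronecker) product
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# Measurement probabilities are the squared amplitudes: the outcomes of
# the two qubits are perfectly correlated (only 00 and 11 can occur)
probs = np.abs(bell) ** 2
print(probs)  # [0.5 0.  0.  0.5]
```

The point of the sketch is the compression angle: a joint state over n qubits lives in a single 2^n-dimensional vector, which is what tensor-network methods later approximate cheaply.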
1) First, how are we going to apply quantum physics theories to AI?
We aim to leverage two fundamental quantum concepts—entanglement and superposition—to enhance our AI models. Our objective is to adapt these concepts to achieve significant breakthroughs in AI, particularly in natural language processing (NLP).
Entanglement: This concept will be utilized to simultaneously predict tokens that are distant from each other within our dataset. This approach should significantly reduce computational time and resources. Entanglement, in this context, is contingent on superposition: essentially, entanglement cannot be effectively implemented in training models unless the data is superposed.
Superposition: This involves the correlation of tokens. "Main Words" should be represented by dense, multidimensional vectors that encompass the entire lexicon related to them. For instance, if the model encounters the word "vehicle" in a sequence, it should be capable of generating related words such as "car," "bus," "Ford," "Tesla," etc., regardless of their distance from the "main word" in the dataset. This should apply to all words within the same lexical field. The data should be pre-embedded in this manner, allowing the model to perform an initial scan and generate all related words in a structured approach. This is not about prediction but about generating a structured set of words from the dataset.
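The "main word" scan described above can be sketched as a similarity lookup over pre-embedded vectors. Everything below is a hypothetical toy: the lexicon, its 3-dimensional embeddings, and the threshold are hand-picked placeholders rather than learned values.

```python
import numpy as np

# Toy pre-embedded lexicon: vehicle-related words are placed close together
# by hand; a real system would learn these dense multidimensional vectors
lexicon = {
    "vehicle": np.array([0.90, 0.10, 0.00]),
    "car":     np.array([0.80, 0.20, 0.10]),
    "bus":     np.array([0.85, 0.15, 0.05]),
    "tesla":   np.array([0.70, 0.30, 0.20]),
    "banana":  np.array([0.00, 0.10, 0.90]),
    "river":   np.array([0.10, 0.00, 0.95]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def lexical_field(main_word, threshold=0.9):
    """Initial scan: return every word whose embedding is close to the
    main word, regardless of where it occurs in the dataset."""
    anchor = lexicon[main_word]
    return sorted(
        w for w, v in lexicon.items()
        if w != main_word and cosine(anchor, v) >= threshold
    )

print(lexical_field("vehicle"))  # ['bus', 'car', 'tesla']
```

Note that this is generation of a structured word set, not next-token prediction, matching the distinction drawn above.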
Subsequently, based on the semantics learned and the correlations between words, the model should predict "context," meaning full phrases and sentences rather than individual tokens. This can be thought of as "filling the gaps." Initially, this might require human intervention and supervised learning. However, we are confident that, with sufficient training samples, the model will become proficient in predicting contexts autonomously.
To achieve these objectives, we plan to employ a hybrid neural network architecture. This architecture will consist of tensor networks, specifically Matrix Product States (MPS), for encoding, and Multi-Layer Perceptrons (MLP) for training and decoding. Tensor networks have shown formidable utility and great potential in handling high-dimensional data, as discussed in the referenced paper (https://arxiv.org/pdf/1306.2164v3). This hybrid approach is inspired by the robust capabilities of tensor networks in managing complex data structures, which aligns well with our goals of leveraging quantum concepts in AI.
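A minimal sketch of the hybrid architecture, under stated assumptions: the sequence length, bond dimension, layer sizes, and random weights below are illustrative placeholders, not our actual design. An MPS-style tensor-train contraction compresses a token sequence into a small code, which an MLP then decodes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: L tokens of dimension d, MPS bond dimension chi,
# MLP hidden width and output size
L, d, chi, hidden, out = 6, 8, 4, 16, 3

# One order-3 MPS core per position, shaped (left_bond, d, right_bond);
# the left boundary bond is 1, the final right bond is left open as the code
cores = [rng.normal(0.0, 0.1, size=(1 if i == 0 else chi, d, chi))
         for i in range(L)]

def mps_encode(tokens):
    """Contract token features against the MPS cores, left to right,
    compressing the whole sequence into a chi-dimensional code."""
    # Each core contracted with its token vector yields a matrix
    mats = [np.tensordot(x, core, axes=([0], [1]))  # (left, right)
            for x, core in zip(tokens, cores)]
    acc = mats[0]                  # (1, chi)
    for m in mats[1:]:
        acc = acc @ m              # chain of matrix products
    return acc.ravel()             # (chi,)

# A small MLP decodes the compressed code (random untrained weights)
W1 = rng.normal(0.0, 0.1, (chi, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.1, (hidden, out)); b2 = np.zeros(out)

def mlp_decode(code):
    return np.tanh(code @ W1 + b1) @ W2 + b2

tokens = rng.normal(size=(L, d))
code = mps_encode(tokens)
logits = mlp_decode(code)
print(code.shape, logits.shape)  # (4,) (3,)
```

The design choice the sketch illustrates: the MPS keeps the encoding cost linear in sequence length with a fixed bond dimension, while the MLP stays a conventional trainable decoder.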
2) We are looking to build a multi-disciplinary entity that is rational, dynamic, and fun, so you can interface with it and have real-time speech-to-speech discussions. It can serve as an assistant, a mentor, or simply a best friend with whom you argue a lot but still cherish.
To achieve this, we will need to go through three crucial steps inspired by the "DIFFUSION MODELS ARE REAL-TIME GAME ENGINES" paper.
1: Train the agent in an interactive environment with given parameters. In our case, these will be: the conversation's dynamic memory content, the rendered screen pixels, the agent's rendering logic, and the agent's logic given the user's input. Our environment here is a speech-to-speech interface where different users hold a variety of discussions with the agent. This is where the community will be much needed, playing a crucial role in developing a diverse dataset from scratch.
2: The entity has a body. Hence, we will need to produce numerous animations, body movements, gestures, and facial expressions, then stitch them to their equivalent speech data to create speech-image pairs.
3: The community's interactions with the agent in our exclusive environment, alongside the animations, will be collected as data upon which a generative model will be trained. This will likely be a text-to-image diffusion model that we re-purpose and condition on both speech and emotion. We will implement a sophisticated dual-channel input system, combining speech and emotion embeddings using cross-attention.
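The dual-channel conditioning in step 3 can be sketched as standard scaled dot-product cross-attention, with the speech and emotion embeddings concatenated into a single key/value sequence that the generator's latent tokens attend over. All shapes and values below are hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16  # model width (hypothetical)

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: each query token takes a
    softmax-weighted average of the conditioning value tokens."""
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values

# Dual-channel conditioning: 10 speech tokens plus 1 emotion token,
# concatenated into one conditioning sequence (toy random embeddings)
speech = rng.normal(size=(10, dim))
emotion = rng.normal(size=(1, dim))
conditioning = np.concatenate([speech, emotion], axis=0)  # (11, dim)

# 4 latent tokens of the generator attend over both channels at once
latents = rng.normal(size=(4, dim))
out = cross_attention(latents, conditioning, conditioning)
print(out.shape)  # (4, 16)
```

In a real diffusion backbone the queries would come from the denoiser's intermediate activations and the keys/values from learned projections, but the conditioning mechanism is the same shape of computation.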
1: This work will not entail a mere application of quantum theory, but rather inspirations subject to further calibration and refinement to fit our goals and resources and, most importantly, to steer away from fantasy and yield legitimate, factual results.
2: I've been called a clout-chaser and delusional before for claiming that AGI is well within reach, and I mean it, and that its delay is due to constraints other than just its admittedly daunting conceptual and technical execution. I do not promise anything, but I do hope that something with the flair of "AGI" emerges from this work.
A portion of the funds will be leveraged primarily to:
- Fund a full-time team and acquire a decent office to ensure consistent operation.
- Acquire the necessary computational resources to power our experiments and run trials at scale.
- Collaborate with leading labs to bring their expertise to the table, making sure we stay on the right track.
- Encourage the community to stay engaged with our speech model, a crucial step to secure a large and multifaceted dataset.
To be announced! Currently in talks with several people. We are already a team of five, researching and running initial trials on the modest hardware available to us (primarily focused on bridging a couple of different architectures together and running some ablation studies).
I'm trying to get the best to join, so it might take a little bit of time.
If you're interested in joining the team, you can DM me on X.
If this project doesn’t pan out, there are a few specific challenges we might face:
One likely cause could be the difficulty of practically applying quantum concepts like superposition and entanglement within our AI framework. While the theory is promising, effectively representing and manipulating dense multidimensional vectors could prove more complex than we anticipated. We may also struggle with extreme computational overhead, particularly in managing the high-dimensional data during training, which could lead to inefficiencies in our model.
Another potential issue might arise from our interactive training environment. As we engage the community and gather diverse conversational data, we might find that the range of inputs isn’t as varied as we’d hoped. This lack of diversity could limit the richness of our dataset, thereby hindering the model’s ability to generate nuanced contextual predictions.
However, even if we face setbacks, we’ll still come away with a wealth of insights and documentation. As we navigate these challenges, we’ll engage in critical discussions about our findings, leading to innovative solutions and refinements in our approach. We’ll document our experiments with techniques like state vector reduction and quantum state tomography, which will be valuable for future research endeavors.
So, while the prospect of failure is undeniable, the knowledge we gain from this process will undoubtedly set us ahead. We’ll emerge with a deeper understanding of the complexities involved and a clearer roadmap for tackling similar projects in the future.