Épiphanie Gédéon
7 months ago
Updated the github to put scripts and data: https://github.com/joy-void-joy/alexplainable
Épiphanie Gédéon
7 months ago
Here I think that "a system built of n (N-1)-level models" would likely be much safer than "one N-level model" for reasonable values of n
That makes sense; in my understanding, this is also the approach of CoEm. What I would worry about is the composition being unreliable and failing spectacularly without a precise review system.
I think this is less likely to happen if the composition layer is very reviewable (for instance, written in plain code, while all the (N-1)-level models stay opaque). To keep using the prototype as an example: if we can thoroughly review that we really are using a "disk surrounded by lots of petals" detector, then overall performance would indeed force each model to stay closely aligned to its task. I would be interested to see how well this holds in practice, for instance, what happens when we train models together this way.
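To make the "plain code composition over opaque submodels" idea concrete, here is a minimal Python sketch. Everything in it is hypothetical: the function names, thresholds, and model interfaces are illustrative, not taken from the alexplainable prototype.

```python
def detect_daisy(image, disk_model, petal_model):
    """Transparent composition layer over opaque submodels.

    Hedged sketch: `disk_model` and `petal_model` stand in for trained
    (N-1)-level detectors. Their internals stay opaque, but the rule
    combining them is plain, reviewable code.
    """
    disk_score = disk_model(image)    # confidence in [0, 1] that a central disk is present
    petal_count = petal_model(image)  # estimated number of surrounding petals
    # The composition rule itself can be audited directly:
    # "a disk surrounded by lots of petals".
    return disk_score > 0.5 and petal_count >= 8
```

The point is that a reviewer never needs to inspect the submodels' weights to verify what role each one plays in the overall decision; only the composition rule needs auditing.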
I think your point really makes sense, and it is one indication that we should focus thoroughly on one step rather than on the whole process at once.
If you want to talk more about this, please feel free to write me at epiphanie.gedeon@gmail.com or book a call
Épiphanie Gédéon
7 months ago
Thank you very much for the kind words, feedback, and generous contribution! We'd be very excited to work on this, and it greatly helps us move toward our minimal funding goal!
As we want concrete results, one of our main focuses will indeed be this modular-by-design approach: breaking each module down into parts until they become small enough that we can train and control their outputs to ensure they are working on the correct tasks.
If I understand your concerns correctly, you see more value and practicality in the first steps of this decomposition (taking one big N-level model and decomposing it into n (N-1)-level models) than in the last steps (e.g. taking one model trained on "white petal" and recoding it into a white long oval shape detector without a drop in performance)? Based on our small prototype and how strong composability seems to be, I would expect the largest performance hit to occur in the initial decomposition steps, and for decomposition to hold up until the end. However, this is something we will closely monitor and adapt to if it does not seem to be the case.
One failure mode we are cognizant of with one-step decomposition is plasticity: even a one-layer convolutional network seems able to adapt to a submodel that has "learned the wrong task" (this happened in our prototype before we corrected for it), so we can't just use overall performance as a measure of how well each submodule has learned what we intend it to. What metrics or training methods we could use to track this better and ensure things compose as we want them to would be one central question of our project.
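One way to catch this failure mode is to benchmark each submodule against its intended subtask directly, instead of relying on end-to-end performance alone. A hedged sketch below; the data structures and threshold are hypothetical, not a description of the prototype's actual tooling:

```python
def audit_submodules(submodules, subtask_benchmarks, threshold=0.9):
    """Evaluate each opaque submodule on labelled examples of the task
    we *intend* it to solve, independently of end-to-end performance.

    Hypothetical interfaces: `submodules` maps a name to a callable model,
    `subtask_benchmarks` maps the same name to (input, expected) pairs.
    """
    report = {}
    for name, model in submodules.items():
        examples = subtask_benchmarks[name]
        correct = sum(1 for x, y in examples if model(x) == y)
        accuracy = correct / len(examples)
        # A submodule that helps end-to-end accuracy but fails its own
        # benchmark has likely "learned the wrong task", and a downstream
        # layer is plastically compensating for it.
        report[name] = (accuracy, accuracy >= threshold)
    return report
```

A per-submodule audit like this turns "is each part doing its job?" into a measurable question, rather than something inferred from the composed system's output.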
With respect to the type of AI, we are still deciding: we originally wanted to investigate GPT-like systems directly, but are currently worried about both expanding capabilities and safety concerns.
Once again, thank you for your support and valuable insights. We look forward to sharing our progress and findings!