To the best of my knowledge, back-propagation IS learning, whether it’s happening in a neural net on a chip, or whether we’re doing it through feedback & altering our understanding ( so both hard logic & our wetware use the same method for learning, though we use a rather sloppy implementation of it. )
& altering the relative significances of concepts IS learning.
( I’m not commenting on whether the new relation between those concepts is right or wrong, only on the mechanism. )
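To be concrete about “altering relative significances”: here’s a toy sketch of a single gradient-descent weight update ( the one-weight case of what back-propagation does at scale ). Every name in it ( step, lr, x, target ) is invented for illustration; it’s in Haskell only because that’s the language of the book I mention below.

    -- one gradient-descent update: nudge the weight against the error gradient
    step :: Double -> Double -> Double -> Double -> Double
    step lr x target w = w - lr * grad
      where
        prediction = w * x
        err        = prediction - target   -- signed error, i.e. the feedback
        grad       = err * x               -- d/dw of 0.5 * (w*x - target)^2

    main :: IO ()
    main = print (take 5 (iterate (step 0.1 2.0 6.0) 0.0))
    -- the printed weights creep toward 3.0, the value that makes the error vanish

Each update shifts how much the input “counts”, which is all I mean by altering a relative significance.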
So I can’t understand your position.
Please don’t feel you need to answer this comment: I’m only putting it here for the record, is all.
Everybody can downvote my comment into oblivion, & everything in the world’ll still be fine.
I’m suggesting you invest in reading a book on https://www.LeanPub.com called “Algebra-Driven Design”.
The reason I’m suggesting it is that the complex requirements you’re being persecuted by are exactly the sort of thing that book can help with.
( I’m presuming you code )
By creating the domain’s algebra, & using meta-programming, you can prevent whole categories of bugs.
I only got part-way into the book ( I’m brain-damaged, & have been failing to learn programming since the 1980s, when I lost 1/10th of my brain volume ), but being able to create an API with ZERO bugs in it is part of what I’d seen in that book.
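For flavour, here’s roughly the kind of thing I mean by the domain’s algebra ( my own toy example, not one from the book ): a tiny made-up “discount” algebra, with one of its laws checked mechanically by QuickCheck, the property-testing library the book leans on, as far as I recall. All the names ( Discount, percentOff, amountOff, andThen ) are invented for illustration.

    import Test.QuickCheck

    -- a made-up domain algebra: a discount is just a price transformer
    newtype Discount = Discount { apply :: Double -> Double }

    percentOff :: Double -> Discount
    percentOff p = Discount (\price -> price * (1 - p / 100))

    amountOff :: Double -> Discount
    amountOff a = Discount (\price -> max 0 (price - a))

    -- composition: apply the first discount, then the second
    andThen :: Discount -> Discount -> Discount
    andThen d e = Discount (apply e . apply d)

    -- one law of the algebra: taking nothing off is a left identity for composition
    prop_leftIdentity :: Double -> Bool
    prop_leftIdentity price =
      apply (amountOff 0 `andThen` percentOff 10) p == apply (percentOff 10) p
      where p = abs price   -- prices are non-negative

    main :: IO ()
    main = quickCheck prop_leftIdentity

Once laws like that are written down & machine-checked, whole classes of “composition silently breaks the API” bugs stop being possible to ship quietly.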
I’m a huge believer in keeping requirements divided into dimensions, & keeping those dimensions orthogonal, because once you fail at that, you’re fscking doomed.
Here, got the link for you:
https://leanpub.com/algebra-driven-design
_ /\ _