Self-reflection: How to make it more effective

"And why is it so important?"

I have a couple of (unique) components that I think are worth mentioning for how they aid the effectiveness of self-reflection.

Retroactive learning
"An agent in a realtime environment may not have time to apply an iterative learning algorithm while it is performing a task. However, when a resource like time becomes available, the agent can replaythe events and learn from them.undefined"

This replay idea reminds me of cognitive load: in a realtime environment, intrinsic cognitive load is high, which reduces germane cognitive load (the component you want to maximize), since one only has a finite working memory capacity to allocate between the 3 main components of cognitive load: extraneous, intrinsic, and germane.

Extraneous cognitive load
The working memory resources spent on irrelevant and distracting stimuli.

Intrinsic cognitive load
The working memory resources demanded by the inherent difficulty of the task itself. For example, calculating 2 different numbers every 10 seconds produces a much lower intrinsic cognitive load than calculating these same 2 numbers every 5 seconds.

Germane cognitive load
Encoding and decoding long-term (working) memory, or as Wikipedia describes it:

"Germane cognitive load is the processing, construction and automation of schemas."

This latter component is the one you want to maximize.

So to recap: the ability to reduce intrinsic cognitive load when engaging in self-reflection, while simultaneously (in parallel, as opposed to concurrently) maximizing germane cognitive load, is one of the heaviest-weighing components making self-reflection effective.
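As a toy illustration of that budget framing (the capacity and load numbers below are invented, not empirical values): with a fixed working memory capacity, whatever intrinsic and extraneous load consume is simply unavailable for germane load.

```python
# Toy model: working memory as a fixed budget split across the 3 load types.
# All numbers are arbitrary illustration, not empirical values.
CAPACITY = 10.0

def germane_budget(intrinsic, extraneous, capacity=CAPACITY):
    """Germane load can use at most whatever the other two leave over."""
    return max(0.0, capacity - intrinsic - extraneous)

print(germane_budget(intrinsic=8.0, extraneous=1.0))  # realtime task: 1.0 left
print(germane_budget(intrinsic=3.0, extraneous=1.0))  # retroactive replay: 6.0 left
```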

Making retroactive learning more effective

But how can we increase its effectiveness? One way to reduce intrinsic cognitive load even further is by lowering the minimum required speed of processing. How? E.g. by writing information down on paper or on a computer.

"Analogous to writing down a math problem on paper rather than trying to solve everything in your head."

This has to do with another component increasing intrinsic cognitive load: the time-based resource sharing model of working memory:

"This theory assumes that representations in working memory decay unless they are refreshed. Refreshing them requires an attentional mechanism that is also needed for any concurrent processing task.[1]"
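A toy simulation of that assumption (the decay constant and schedule are invented): a memory trace decays exponentially, and each time step attention is either captured by the concurrent task or free to refresh the trace.

```python
import math

DECAY_RATE = 0.3  # invented decay constant per time step

def decayed(strength, dt=1.0):
    """Exponential decay of a working-memory trace over dt time steps."""
    return strength * math.exp(-DECAY_RATE * dt)

strength = 1.0
for t in range(10):
    strength = decayed(strength)
    if t % 2 == 1:          # attention is free half the time...
        strength = 1.0      # ...and refreshing restores the trace
    print(t, round(strength, 3))
```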

Machine learning equivalent of retroactive learning: Backpropagation
Not to be confused with reverse propagation, which is making the inverse connection, e.g. from "A → B" to "B → A". Backpropagation could also create the inverse connection, but it also includes "A ↔ B".

Backpropagation is essentially propagation in the opposite direction, which is analogous to retroactive learning.

"Backpropagation: updating the weights and connections to minimize loss."

With loss referring to incorrect estimate(s).
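A minimal numeric sketch of that quote, using a single weight and a squared-error loss (the training pair and learning rate are invented): the gradient of the loss is propagated back to the weight, and the weight is nudged to reduce the loss.

```python
# One-weight model: prediction = w * x, loss = (prediction - target)^2.
w = 0.0
learning_rate = 0.1
x, target = 2.0, 6.0  # invented training pair; the ideal w is 3.0

for step in range(20):
    prediction = w * x
    loss = (prediction - target) ** 2
    grad = 2 * (prediction - target) * x  # dLoss/dw, propagated backward
    w -= learning_rate * grad             # update the weight to minimize loss

print(round(w, 3), round(loss, 6))  # w approaches 3.0, loss approaches 0
```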

Related to “opportunity cost”: the closer the loss is to zero, the closer you are to the best possible option (the global minimum of the loss function).

Calculating the local/global maxima/minima can be done via:

"Rate of change in quantity - rate of change in quality = 0"

Swapping "quantity" with "quality" is possible, but could cause problems in the area of human intuition, e.g. the global minimum being the most efficient point in the multi-dimensional phase space (which is semi-structured data as opposed to structured data).
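As a standard instance of that stationary-point condition (the function below is an invented example): extrema sit where the rate of change equals zero.

```python
def f(x):
    """Invented example function with a single (global) minimum at x = 3."""
    return (x - 3) ** 2 + 1

def rate_of_change(x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)  # numerical derivative

# Scan for the point where the rate of change crosses zero.
xs = [i / 100 for i in range(601)]
best = min(xs, key=lambda x: abs(rate_of_change(x)))
print(best, f(best))  # ~3.0 and 1.0: the global minimum
```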

Learning rate
Another heavily weighing concept in machine learning is what’s called the learning rate:

"In machine learning and statistics, the learning rate is a tuning parameter in an optimization algorithm that determines the step size at each iteration while moving toward a minimum of a loss function.[1]"

In this analogy, backpropagation has the intent of increasing/maximizing the learning rate (unless otherwise specified), though strictly speaking the learning rate in machine learning is a tuning parameter that is set rather than maximized.
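To make the step-size role concrete (the function and the three rates are invented examples): the same gradient step on f(w) = w² behaves very differently depending on the learning rate.

```python
def step(w, lr):
    grad = 2 * w          # derivative of f(w) = w^2
    return w - lr * grad  # the learning rate scales the step size

for lr in (0.01, 0.1, 1.1):  # too small, reasonable, too large
    w = 5.0
    for _ in range(10):
        w = step(w, lr)
    print(lr, round(w, 4))   # slow convergence, fast convergence, divergence
```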

The step size per iteration can be “measured” via construal level theory and its primary dimensions:

Construal level theory
Construal level theory describes a continuum ranging from abstractions (high-level construals) to concretes (low-level construals), a continuum being on the ordinal scale.

The 5 primary dimensions of construal level theory are:

 * 1) Temporal (something with a long half-life is more abstract and is a higher-level construal);
 * 2) Spatial (with longer distances being higher-level construals);
 * 3) Social (e.g. the biopsychosocial model is a higher-level construal than a specific individual);
 * 4) Hypothetical (the higher the probability, the lower the construal level);
 * 5) Informational (the more something connects with other concepts, the lower the construal level (conjunction) or the higher (disjunction)). E.g. “A and B” (conjunction) is more concrete than “A or B” (disjunction); see the sketch below.

If the steps and/or iterations are of higher-level construals, the learning rate is higher (assuming "≥minimum required accuracy for e.g. formulation of conclusions"), e.g. steps/iterations featuring higher temporal distances, with the exception of the hypothetical dimension, where it's the inverse.
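A quick way to see the fifth (informational) dimension at work (the sets below are invented): "A and B" picks out fewer possibilities than "A or B", which is why the conjunction reads as the more concrete construal.

```python
# Possible worlds in which each proposition holds (invented example sets).
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

conjunction = A & B  # "A and B": fewer possibilities, more concrete
disjunction = A | B  # "A or B": more possibilities, more abstract
print(len(conjunction), len(disjunction))  # 2 6
```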

Higher-level informational construals paired with disjunction increase the learning rate up until a certain point (a local/global maximum).

Lower-level informational construals paired with conjunction can, beyond that point (the local maximum), increase the learning rate due to their feature of possessing a lower quantity of manipulable information (i.e. increasing the step size per iteration while staying "≥minimum required accuracy"), keeping the principle of indifference in mind.

Backpropagation itself also has a learning rate (the step size of the weight updates it drives).

Back to human terms: (self-)reflection increases one's (future) learning rate, while also increasing the speed and accuracy of reflection itself.

Manipulable information

What backpropagation does in parallel is change the quantity of manipulable information.

This, hopefully, allows one to increase one's accuracy and/or speed of e.g. conclusions, actions, etc.

If you have learned the letters "A" and "B" and, via backpropagation, connect them and learn "AB", you have essentially increased your information pool from 2 to 3 elements.
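A toy illustration of that pool growth (pure combinatorics, nothing model-specific): every learned combination becomes a new manipulable element, so the pool compounds.

```python
from itertools import combinations

pool = {"A", "B"}  # atomic elements already learned
pool |= {"".join(sorted(pair)) for pair in combinations(pool, 2)}
print(sorted(pool))  # ['A', 'AB', 'B'] -> the pool grew from 2 to 3

# One more round: combinations of the enlarged pool keep compounding.
pool |= {"+".join(sorted(pair)) for pair in combinations(pool, 2)}
print(len(pool))  # 6 manipulable elements now
```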

Personally, I like to call this process "associative combinatorial hierarchical learning". Although each word might seem redundant within the same concept, I chose them to increase awareness (detail) whenever decoding and applying the concept (analogous to being consciously unconsciously competent).

The latter being analogous to increasing one's quantity of manipulable information (and, hence, e.g. accuracy).

Again, increasing one's quantity of manipulable information increases/decreases the learning rate up until certain criticalities (sometimes it's better to decrease one's quantity of manipulable information, e.g. via conjunction or theory/ontological reductionism).

Consciousness

One main advantage of consciousness is the capacity to think longer (a higher half-life) about a matter (as opposed to the more fleeting subconscious content).

Consciousness allows the process of reflection to happen with a much higher temporal half-life.

Vice versa, reflection might increase the temporal half-life of your consciousness of the things you reflect on (and the probability of those things entering your (pre-)consciousness in the first place via global ignition/spreading activation).
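A minimal sketch of spreading activation (the concept graph and decay factor are invented): activating one concept pushes a decaying share of activation to its associates, which is one way reflected-on content could become more likely to re-enter (pre-)consciousness.

```python
# Invented concept graph; reflection would strengthen these associative links.
graph = {
    "reflection": ["memory", "learning"],
    "learning": ["memory"],
    "memory": ["sleep"],
    "sleep": [],
}

def spread(source, decay=0.5, depth=3):
    """Propagate decaying activation outward from a source concept."""
    activation = {source: 1.0}
    frontier = [source]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for neighbor in graph[node]:
                gain = activation[node] * decay
                if gain > activation.get(neighbor, 0.0):
                    activation[neighbor] = gain
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return activation

print(spread("reflection"))  # nearby concepts receive the most activation
```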

Spaced repetition
Spaced repetition can increase the half-life of the selected reflections.

It can also function as your second brain. As your second subconsciousness. Even as your second long-term working memory (albeit slower).

Your long-term memory could be seen as your working memory, but a lot slower (a continuum).
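A toy version of that half-life framing (all constants invented): recall decays exponentially since the last review, and each successful, increasingly spaced review extends the half-life, which is the spacing effect in miniature.

```python
def recall_probability(elapsed_days, half_life_days):
    """Exponential forgetting: p = 2 ** (-elapsed / half_life)."""
    return 2 ** (-elapsed_days / half_life_days)

half_life = 1.0  # days; invented starting value
last_review = 0
for review_day in (1, 3, 7, 15):  # increasingly spaced reviews
    elapsed = review_day - last_review
    p = recall_probability(elapsed, half_life)
    half_life *= 2.0              # toy rule: each review doubles the half-life
    last_review = review_day
    print(review_day, round(p, 2), half_life)
# Recall probability stays at 0.5 even as the gaps between reviews double.
```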

Extra: Analogs

 * 1) Minimum required intrinsic cognitive load is analogous to computational complexity, e.g. the time-based resource sharing model of working memory;
 * 2) Reducing the quantity of manipulable information while "≥minimum required accuracy" is analogous to theory and ontological reductionism.

To-do list

 * spaced repetition → storing self-reflection + indirect and automated self-reflection → spacing effect → e.g. study-phase retrieval theory
 * action research
 * more pictures
 * sleep → increasing level of construals | same with self-reflection
 * conversion from declarative to procedural → e.g. half-life | “combining experience with theory” | top-down vs. bottom-up
 * reductionism as a thinking strategy + link
 * prioritize content
 * backup in Google Docs