Learning compression score

"Learning compression score = Quality retained / Size ratio (summary size ÷ original size)"

Calculating Learning Compression Score
This method, which I will call the “learning compression score”, is a (futuristic) idea that I imagine could accelerate learning.

Essentially, this “score” measures how efficiently you learn from a particular source in terms of time. For example, two similar articles may cover the same idea, but one of them explains it in half the reading time.

So how can we “calculate” the compression score of a source? Let’s say there is a book containing 70,000 words, and an article summarizing that book in only 1,000 words. In other words, the source has been compressed by roughly 99% (1,000 ÷ 70,000 ≈ 1.4% of the original size). This, however, is just one component of the learning compression score.

To calculate the compression score, we also have to “estimate” how much (subjective) “quality” we have retained from the original source. Usually, quality goes down when a source is summarized, but in rare cases it might even go up.

Let’s say we have retained 80% of the “quality”. Rounding the size ratio to 1%, we calculate the compression score by dividing 0.8 by 0.01, which gives 80. This number essentially means that for every second you spend reading the summary, you learn roughly 80x faster than when reading the original source.
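The arithmetic above can be sketched as a small function (the function name and the exact-ratio variant are my own; the article rounds the size ratio to 1% to get 80):

```python
def compression_score(original_words, summary_words, quality_retained):
    """Learning compression score = quality retained / size ratio.

    quality_retained is the subjective fraction (0..1) of the original
    source's value that the summary keeps.
    """
    size_ratio = summary_words / original_words  # e.g. 1,000 / 70,000 ≈ 0.014
    return quality_retained / size_ratio

# Book of 70,000 words vs. a 1,000-word summary retaining ~80% of the quality:
score = compression_score(70_000, 1_000, 0.8)  # ≈ 56 with the exact ratio
# Rounding the size ratio to 1% (as the article does) gives 80:
rounded_score = 0.8 / 0.01
```

With the exact 1.4% ratio the score comes out near 56 rather than 80; the difference is purely the rounding of the size ratio, not a change in the idea.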

Futuristic Method To Enhance Learning?
I think we will become more and more precise and efficient at calculating things like how many calories, how much protein, and which vitamins a certain food contains. In fact, we might even be able to do this in the future via brain augmentation, simply by staring at the food.

Perhaps it might even become possible to calculate more precisely how valuable and efficient a certain source would be for us to read. In a sense, we already do that (subconsciously), based on our feelings of curiosity, joy, and so on.