

The Future of Personalised Learning



Technology-enabled personalized learning systems continue to capture the imaginations of educators, and for good reason. The promise is tantalizing: tailored learning for every student, providing just what they need, when they need it, at just the right pace and with just the right kind of help, all at massive scale.

Personalized learning refers to instruction in which the pace of learning and the instructional approach are optimized for the needs of each learner. Learning objectives, instructional approaches, and instructional content (and its sequencing) may all vary based on learner needs. In addition, learning activities are made available that are meaningful and relevant to learners, driven by their interests and often self-initiated. (NETP 2016)

But realizing this vision is incredibly complicated. There are so many variables to measure and control for, so many inferences to judge and decision points to set, that the prospect becomes immediately overwhelming. The natural instinct of many designers of these systems has been to reduce the variables as much as possible, thus exerting greater control over the environment and the learner, with the hope of increasing the reliability of outcomes.

Design Continuum: Closed vs Open

In doing so, designers move to one end of a design continuum. That continuum runs from closed and controlled on one end to open and context aware on the other. The closed, controlled approach reminds me of a virtual reality headset: it encloses the learner in the system, isolating them, and attempts to provide for their every need without referencing a larger context. It reserves most of the control of the system for itself.

In contrast, I think of an open, context aware approach as the augmented reality (AR) displays found on modern fighter jets and, increasingly, in the automotive industry. This approach overlays critical digital information on top of the outside world by projecting it onto the cockpit window or the windshield of a car, providing the pilot or driver constant information synchronized with changing external conditions. More recently, AR has been showing up in the entertainment industry and even in classrooms. It situates and supports the user in a broader context, with the goal of prompting and interpreting, but not controlling, the learner.

Closed Personalized Learning Systems

The closed approach to design characterizes the vast majority of the first wave of personalized learning systems in schools and most of the intelligent tutoring systems that preceded them. As a result, most personalized learning systems tend to:

  • Be stand-alone solutions
  • Focus on a single subject area
  • Run independently of other data systems
  • Run independently of the teacher’s input and control
  • Limit the learner’s agency in decision making about their learning

They are algorithm centric; that is, they assume the algorithm is going to teach the student in the best and fastest way possible. If based on sound research from the learning sciences, it may very well be that the algorithm will outperform more naive approaches. But systems architected in this manner may also have significant disadvantages:

  • They tend to isolate the student from peers and teachers
  • They tend to bog down when a student is not just a little bit off, but a lot off (when Hint #3 and Alternative Explanation #5 still don’t do the trick)
  • They provide a siloed dataset that relates only to itself and which tends to be reported only within the system
  • They don’t benefit from knowledge that teachers have about students from other contexts or from insight that students might have about themselves
  • They are generally unaware of learning experiences outside of the system that might have changed the mastery level and/or learning needs of students
  • They are entirely dependent upon the itemset within the system to build their adaptations.

Like virtual reality headsets, they create a learning context around the student and immerse them in it. This provides tremendous control to the system designer to eliminate distractions, which can be beneficial to learners who might otherwise become overwhelmed. But it also blocks out other, potentially helpful, sources of input.

Context Aware Personalized Learning Systems

Personalized learning systems could be more powerful, more useful, more relevant, and more accurate if they reoriented from being algorithm centric to being context aware. This approach still runs on algorithms, of course, but, unlike a closed system, a context-aware system is open to other sources of external input that may be able to provide crucial information. These systems tend to leave a significant degree of decision-making up to the user of the system.

Context aware personalized learning systems would have the following characteristics:

1. Be aware of other points of input and use them to inform their algorithms.

a. They would gather information from external assessments; from other learning systems; from teachers, coaches, mentors, and peers; and from available background information on students’ strengths and needs. They would be hungry for data from outside sources and would use it to troubleshoot and adjust their own assertions and predictions about the learner.
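One way to picture this data-hungry behavior is a simple weighted fusion of the system’s own mastery estimate with estimates from outside sources. This is only an illustrative sketch under assumed conventions: the source names, weights, and the weighted-average scheme are inventions for this example, not a description of any existing product.

```python
# Hypothetical sketch: fusing external evidence into a mastery estimate.
# Estimates are on a 0-1 scale; weights express how much trust the
# system places in each outside source (all values here are made up).

def fuse_mastery_estimates(internal: float,
                           external: dict[str, tuple[float, float]]) -> float:
    """Combine the system's own mastery estimate with weighted external
    estimates. `external` maps a source name to (estimate, weight)."""
    total_weight = 1.0        # the system's own estimate carries weight 1.0
    weighted_sum = internal
    for estimate, weight in external.values():
        weighted_sum += estimate * weight
        total_weight += weight
    return weighted_sum / total_weight

# A learner the system rates at 0.9 mastery, but whose teacher and a
# district assessment both report weaker performance:
fused = fuse_mastery_estimates(
    0.9,
    {"teacher": (0.5, 0.5), "district_assessment": (0.6, 0.8)},
)
```

The external reports pull the estimate down toward roughly 0.71, flagging a discrepancy the system could then probe with its own items.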

2. Be aware of other points of output and allow data to flow out in machine readable formats.

a. They would feed their output data into broader teacher dashboards and enable a view of the whole student by seamlessly contributing their data as one piece of a larger puzzle. They would provide data about learners in formats useful to researchers, administrators, and school counselors, all according to laws and best practices around privacy and security.
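For the output side, an existing interoperability standard such as xAPI (the Experience API) already defines a machine-readable statement format with an actor/verb/object shape. The sketch below follows that general shape; the learner details and activity ID are invented for illustration, and a real implementation would follow the full specification.

```python
# A minimal sketch of emitting learner data in a machine-readable,
# interoperable form, loosely following the shape of an xAPI statement.
import json

def mastery_statement(learner_email: str, learner_name: str,
                      skill_id: str) -> str:
    """Serialize a 'learner mastered skill' event as xAPI-style JSON."""
    statement = {
        "actor": {"mbox": f"mailto:{learner_email}", "name": learner_name},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/mastered",
            "display": {"en-US": "mastered"},
        },
        "object": {"id": skill_id, "objectType": "Activity"},
    }
    return json.dumps(statement)

# Any dashboard or research tool that speaks the same format can consume this:
payload = mastery_statement("pat@example.org", "Pat",
                            "https://example.org/skills/fractions-1")
```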

3. Be aware of teaching and learning that happens outside of the system.

a. They would include a mechanism to allow a teacher (or the learners themselves) to input that the learner had practiced or mastered material externally to the system. Of course, the system could still provide a brief quiz, test, or activity to verify the accuracy of the report.

b. They would include mechanisms that would also allow a teacher to report potential regression, perhaps after a long absence or summer break. The system would then be alerted to follow up to verify that previously mastered concepts were still intact and to remediate as needed.

c. More advanced systems could also enable and track learning outside of the system and account for it within the system. For example, they could suggest activities outside of the system that the student is either ready for or needs extra help with. After the external activity takes place, they would incorporate the results, verifying mastery as needed.
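The report-and-verify flow described in item 3 could be as simple as a small bookkeeping component that queues a verification quiz whenever external mastery or possible regression is reported. The class and method names below are hypothetical, chosen only to illustrate the idea.

```python
# Hypothetical sketch: accepting reports that learning (or forgetting)
# happened outside the system, then verifying before updating the
# learner model. All names here are invented for illustration.

class ExternalReportHandler:
    def __init__(self) -> None:
        self.mastery: dict[str, bool] = {}   # skill -> verified mastery
        self.pending_quiz: set[str] = set()  # skills awaiting verification

    def report_external_mastery(self, skill: str) -> None:
        """A teacher or learner reports mastery achieved outside the system."""
        self.pending_quiz.add(skill)         # trust, but verify with a quiz

    def report_possible_regression(self, skill: str) -> None:
        """E.g. after a long absence or summer break."""
        if self.mastery.get(skill):
            self.pending_quiz.add(skill)     # re-check a mastered skill

    def record_quiz_result(self, skill: str, passed: bool) -> None:
        """Verification quiz finished; a failed check triggers remediation."""
        self.pending_quiz.discard(skill)
        self.mastery[skill] = passed
```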

4. Be aware of and responsive to input from external experts, such as teachers and researchers.

a. They would recognize external learning experts and be appropriately responsive to their input. For example, the system might make broader adjustments to its algorithms, increasing or decreasing difficulty levels based on expert input. Of course, the system could still verify this input with additional measurement and could present any discrepancies between the expert’s assessment and the system’s assessment back to the expert for consideration.

5. Be aware of and responsive to input from the student.

a. While learners can be unreliable sources of information about their own mastery, systems that collect feedback from them could benefit from understanding their perceived efficacy, progress, and emotional state, all of which impact learning and performance. Students could share when they felt the system was going too fast or too slow, or cry “Help!” when they were stuck and the system was not recognizing their distress. Students could also indicate their level of effort or confidence in their responses. This feedback could be used to adjust to their frame of mind and also to improve the system.

b. They would allow the learner meaningful agency in their learning. This approach would allow the user to make substantial choices (not just choosing whether to do addition or subtraction first) that tailor content to their interests and needs. It would work to maximize student choice while still providing sophisticated customization of learning.

Of course, these systems also have drawbacks. As alluded to above, student (and teacher) input can be subjective, incomplete, or just plain wrong. That is why systems designed in this way also need safeguards built in to validate the information coming from external sources.

First Steps

A context aware personalized learning system will require significant change. It will certainly require a paradigm shift among many who design these systems. It will require robust and meaningful data interoperability standards among personalized learning systems at a level of granularity that does not presently exist. It also will require a great deal of new engineering and functionality. It is hard to imagine these kinds of systems being fully functional anytime soon. However, we can start now by creating a foundation for this functionality in our current systems by:

  • Complying with interoperable data standards
  • Creating APIs for both receiving and sharing data with other systems
  • Designing systems capable of processing learning data from external activities and assessments
  • Designing algorithms to evaluate external inputs and adapt accordingly
  • Maximizing and prioritizing features that support learning agency
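As a concrete first step toward the interoperability items above, systems could agree on a minimal shared schema for learning events, so that the “receiving” and “sharing” APIs have a common currency. The schema below is a hypothetical illustration, not an existing standard.

```python
# Hypothetical sketch of a minimal, shared learning-event schema that
# two systems could exchange over an API. Field names are illustrative.
from dataclasses import dataclass, asdict
import json

@dataclass
class LearningEvent:
    learner_id: str
    skill_id: str
    source: str      # which system or person produced the evidence
    score: float     # normalized to a 0-1 scale
    timestamp: str   # ISO 8601

    def to_json(self) -> str:
        """Machine-readable form for the 'sharing' side of an API."""
        return json.dumps(asdict(self))

    @classmethod
    def from_json(cls, payload: str) -> "LearningEvent":
        """Parsing form for the 'receiving' side of an API."""
        return cls(**json.loads(payload))
```

A system that can round-trip events like this can both publish its own data to dashboards and ingest evidence produced elsewhere.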

In short, this approach would open up closed systems, making them responsive to other systems and to teachers, learners, and others outside of the system and what they might do to impact learning. More sophisticated implementations would become partners with those outside the system in adjusting the algorithms inside the system, while also verifying that the changes are justified.

A Crossroads: Virtual Reality or Augmented Reality

We are at a crossroads in our design approach to personalized learning. We can keep burrowing in, creating an ever narrower and more controlled environment, like a VR world that grows ever more elaborate and detailed, yet exists for and isolates a single person inside an enclosed headset. Or we can turn outward to create context aware systems that function like augmented reality displays that sense, interpret, and add value to the wider world of learning, leaving as much agency in the hands of the learner as possible. The path to context aware systems is longer and more difficult, but ultimately may lead to more meaningful learning and engagement that, in my view, is worth the effort.


About the Author

This article was written by Joseph South, Director of the Office of Educational Technology at the U.S. Department of Education.


The Most Important Tech Job that Doesn’t Exist



Yesterday I asked a prominent VC a question:

“Why is it that, despite the fact that so many successful startup ideas come from academic research, on the investment side there doesn’t seem to be anyone vetting companies on the basis of whether or not what they’re doing is consistent with the relevant research and best practices from academia?”

His response was that, unlike with startups in other sectors (e.g. biotech, cleantech, etc.), most tech startups don’t come out of academia, but rather are created to fill an unmet need in the marketplace. And that neither he nor many of his colleagues spent much time talking with academics for this reason.

This seems to be the standard thinking across the industry right now. But despite having nothing but respect for this investor, I think the party line here is unequivocally wrong.

Let’s start with the notion that most tech startups don’t come out of academia. While this may be true if you consider only the one-sentence pitch, once you look at the actual design and implementation choices these startups are making there is typically quite a lot to work with.

For example, there is a startup I recently looked at that works to match mentors with mentees. Though one might not be aware of it, there is actually a wealth of research into best practices:

  • What factors should be used when matching mentors with mentees?
  • How should the relationship between the mentor and mentee be structured?
  • What kind of training, if any, should be given to the participants?

That’s not to say that a startup that’s doing something outside the research, or even contraindicated by the research, is in any way suspect. But it does raise some questions: Does the startup have a good reason for what they’re doing? Are they aware of the relevant research? Is there something they know that we don’t?

If the entrepreneurs have good answers to these questions then it’s all the more reason to take them seriously. But if they don’t then this should raise a few red flags. And it’s not only niche startups in wonky areas where this is an issue.

For example, I rarely post to Facebook anymore, but people who follow me can still get a good idea of what I’m up to. Why? Because Facebook leverages the idea of behavioral residue to figure out what I’m doing (and let my friends know) without me having to explicitly post updates. It does this by using both interior behavioral residue, e.g. what I’m reading and clicking on within the site, and exterior behavioral residue, e.g. photos of me taken outside of Facebook.

To understand why leveraging behavioral residue is so important for social networks, consider that of the people who visit a typical website, only about 10% will make an account. Of those, about 10% will make at least one content contribution, and of those, about 10% will become core contributors. So if you consider a typical user with a couple hundred friends, this translates into seeing content from only a tiny handful of other people on a regular basis.
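The funnel described above can be worked through numerically for a user with a couple hundred friends, using the article’s rough 10% figures:

```python
# The participation funnel, worked through for a user with 200 friends.
# The 10% conversion rates are the article's rough estimates.
visitors = 200                   # friends who visit the site
accounts = visitors * 0.10       # ~10% make an account
contributors = accounts * 0.10   # ~10% of those contribute at least once
core = contributors * 0.10       # ~10% of those become core contributors

# Only about 2 friends ever contribute anything, and well under 1 is a
# regular contributor -- hence the value of behavioral residue.
```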

In contrast with Facebook, one of the reasons why FourSquare has yet to succeed is significant problems with its initial design decisions:

  • The only content on the site comes from users who manually check into locations and post updates. This means that of my 150 or so friends, I’m only seeing what one or two of them are actually doing, so what’s the value?
  • The heavy use of extrinsic motivators (e.g. badges): it has been shown time and again that extrinsic motivation undermines intrinsic motivation.

The latter especially is a good example of why investing on traction alone is problematic: many startups that leverage extrinsic rewards are able to get a good amount of initial traction, but almost none of them are able to retain users or cross the chasm into the mainstream. Why isn’t it anyone’s job to know this, even though the research is readily available for anyone who wants to read it? And why is it so hard to go to any major startup event without seeing VCs showering money on these sorts of startups that are so contraindicated by the research that they have almost no realistic chance of succeeding?

This same critique of investors applies equally to the startups themselves. You probably wouldn’t hire an attorney who wasn’t willing to familiarize himself with the relevant case law before going to court. So why is it that the vast majority of people hired as community managers and growth marketers have never read Robert Kraut? And the vast majority of people hired to create mobile apps have never heard of Mizuko Ito?

A lot of people associate the word design with fonts, colors, and graphics, but what the word actually means is fate — in the most existential sense of the word. That is, good design literally makes it inevitable that the user will take certain actions and have certain subjective experiences. While good UX and graphic design are essential, they’re only valuable to the extent that the person doing them knows how to create an authentic connection with the users and elicit specific emotional and social outcomes. So why are we hiring designers mainly on their Photoshop skills and maybe knowing a few tricks for optimizing conversions on landing pages? What a waste.

Of all the social sciences, the following seem to be disproportionately valuable in terms of creating and evaluating startups:

  • Psychology / Social Psychology
  • Internet Psychology / Computer Mediated Communication
  • Cognitive Development / Early Childhood Education
  • Organizational Behavior
  • Sociology
  • Education Research
  • Behavioral Economics

And yet not only is no one hiring for this, but having expertise in these areas likely won’t even get you so much as a nominal bonus. I realize that traction and team will always be the two biggest factors in determining which startups get funded, but have we really become so myopic as to place zero value on knowing whether a startup is congruent with or contraindicated by the last 80+ years of research?

So should you invest in (or work for) the startup that sends text messages to people reminding them to take their medicine? How about the one that lets you hire temp laborers using cell phones? Or the app for club owners that purports to increase the amount of money spent on drinks? In each of these cases there is a wealth of relevant literature that can be used to help figure out whether or not the founders have done their homework and how likely they are to succeed. And it seems like if you don’t have someone who’s willing to invest a few hours to read the literature then you’re playing with a significant handicap.

Investors often wait months before investing in order to let a little more information surface, during which time the valuation can (and often does) increase by literally millions. Given that the cost of doing the extra research for each deal would be nominal in the grand scheme of things, and given the fact that this research can benefit not only the investors but also the portfolio companies themselves, does it really make sense to be so confident that there’s nothing of value here?

What makes the web special is that it’s not just a technology or a place, but a set of values. That’s what we were all originally so excited about. But as startups become more and more prosaic, these values are largely becoming lost. As Howard Rheingold once said, “The ‘killer app’ of tomorrow won’t be software or hardware devices, but the social practices they make possible.” You can’t step in the same river twice, but I think there’s something to be said for startups that make possible truly novel and valuable social practices, and for creating a larger ecosystem that enables them.


About the Author

This article was written by Alex Krupp.



How Google’s AI Mastered All Chess Knowledge in Just 4 Hours



Chess isn’t an easy game, by human standards. But for an artificial intelligence powered by a formidable, almost alien mindset, the trivial diversion can be mastered in a few spare hours.

In a new paper, Google researchers detail how their latest AI evolution, AlphaZero, developed “superhuman performance” in chess, taking just four hours to learn the rules before obliterating the world champion chess program, Stockfish.

In other words, all of humanity’s chess knowledge – and beyond – was absorbed and surpassed by an AI in about as long as it takes to drive from New York City to Washington, DC.

After being programmed with only the rules of chess (no strategies), in just four hours AlphaZero had mastered the game to the extent it was able to best the highest-rated chess-playing program Stockfish.

In a series of 100 games against Stockfish, AlphaZero won 25 games while playing as white (with first mover advantage), and picked up three games playing as black. The rest of the contests were draws, with Stockfish recording no wins and AlphaZero no losses.

“We now know who our new overlord is,” said chess researcher David Kramaley, the CEO of chess science website Chessable.

“It will no doubt revolutionise the game, but think about how this could be applied outside chess. This algorithm could run cities, continents, universes.”

Developed by Google’s DeepMind AI lab, AlphaZero is a tweaked, more generic version of AlphaGo Zero, which specialises in playing the Chinese board game, Go.

DeepMind has been refining this AI for years, in the process besting a series of human champions who fell like dominoes before the indomitable, “Godlike” neural network.

That victory streak culminated in a startling success in October, in which a new fully autonomous version of the AI – which only learns by playing itself, never facing humans – bested all its former incarnations.

By contrast, AlphaGo Zero’s predecessors partly learned how to play the game by watching moves made by human players.

That effort was intended to assist the fledgling AI in learning strategy, but it seems it may have actually been a handicap, since AlphaGo Zero’s fully self-reliant learning proved devastatingly more effective in one-on-one competition.

“It’s like an alien civilisation inventing its own mathematics,” computer scientist Nick Hynes from MIT told Gizmodo in October.

“What we’re seeing here is a model free from human bias and presuppositions. It can learn whatever it determines is optimal, which may indeed be more nuanced than our own conceptions of the same.”

But things are moving so fast in this field that already the October accomplishment may have been outmoded.

In their new paper, the team outlines how the latest AlphaZero AI takes this reliance on self-play – a technique called reinforcement learning – and applies it in a much more generalised way, giving it a broader approach to problem solving.

That broader focus means AlphaZero doesn’t just play chess. It also plays Shogi (Japanese chess) and Go – and, perhaps unsurprisingly, it took just two and eight hours respectively to master those games as well.

For now, Google and DeepMind’s computer scientists aren’t commenting publicly on the new research, which hasn’t as yet been peer-reviewed.

But from what we can tell so far, this algorithm’s dizzying ascent to the pinnacle of artificial intelligence is far from over, and even chess grandmasters are bewildered by the spectacle before them.

“I always wondered how it would be if a superior species landed on Earth and showed us how they played chess,” grandmaster Peter Heine Nielsen told the BBC.

“Now I know.”


About the Author 

This article was produced by Grendz, a site covering new technology trends, science breakthroughs, and green and positive ideas and news.
