It’s finally published. This report is a stunning body of work and emphasizes the United States’ overwhelming need to be a (the?) dominant player in artificial intelligence (AI).
One of the things I love is the acceleration in what brain scientists are learning about what makes us tick. With fMRI and other methods, neuroscientists are really pushing the knowledge envelope and uncovering all sorts of cool stuff about how that mass in our skulls works and allows our minds to function (or not function, if one has a mental illness). One of the findings presented in this video is that yes, sleep enables your brain to be more creative when you’re awake, but sleep does a lot more for us.
This GoogleTechTalk is with Matthew P. Walker, PhD, an associate professor who runs the Sleep and Neuroimaging Laboratory in UC Berkeley’s Psychology Department. He covers topics such as the impact of sleep on human brain function, especially in learning and memory; brain plasticity; emotional regulation; affective and clinical mood disorders; and aging.
This video is a bit long but worthwhile if you have an interest in this topic like I do. NOTE: Usually I watch these using my $99 AppleTV box and its YouTube app. Since I’ve subscribed to this GoogleTechTalk “channel” on YouTube I can simply select it and away I go. Much better than sitting at my computer for over an hour!
Can the lessons learned from video games point the way to a new fail, fail, fail and learn model for K-12 education?
Whether you are a Republican or Democrat, parent or teacher, employer or employee, trainer or trainee, one thing is clear: traditional models of learning are being attacked from all corners as broken, virtually unchanged since the 1890s, and desperately in need of fundamental reform.
You’ve seen or heard the statistics: India’s top 10% of K-12 students outnumber all the students in the U.S., and the Asia Pacific region graduates more PhDs in one year than the U.S. does in 10.
Questions abound about how to fix it:
- With the world’s information increasingly at our fingertips, connected to the internet at home and on the go via computers, smartphones and tablets, how much information do we need to pack into our brains the way traditional K-12 models emphasize?
- Now that cognitive scientists, psychologists and education-oriented startups are gaining new insights into the ways students can learn, and learn quickly, what are the right models?
- With gaming and game theory viewed by many experts as the best way to move into a model of fail, fail, fail and learn…what works? Will all our kids be taught with Halo 3 or other off-the-shelf games?
What’s the fix? It’s a complex question. I’ve watched several talks by experts in the field, and a new Minnesota startup (CogCubed) has compiled several videos on one page here that you should watch if you’re interested. What’s pretty clear after watching them all (which I’ve done over the last few years) is that there are some great ideas out there but few ‘platforms’ upon which people can build fail, fail, fail, learn applications.
Let’s face it: without platforms (e.g., computers, the internet, desktop & now ebook publishing) and higher level tools and approaches, new innovations and industries struggle to emerge, even with great ideas and directions!
What was a big surprise this morning was discovering just such a platform company, one enabling students to engage in learning that encourages play, manipulation, failing and ultimately learning. Sifteo is a “…venture-backed startup based in San Francisco, California. We make Sifteo cubes, an interactive game system designed for hands-on fun and Intelligent Play. We also make a growing number of unique and exclusive games for Sifteo cubes.”
Rather than me telling you more, go view those compiled videos above and then watch this very short introduction by David Merrill about Sifteo. If you don’t come away with interest, intrigue and the ability to visualize new emergent models of learning, I’ll be even more surprised:
To learn more, here is David Merrill’s talk at a recent TED conference or just go to their website.
I used to be a bit disturbed by how simple it was to manipulate photographs. Now video and film manipulation has far outpaced that, making whatever vision the director has possible. I’ve now watched this video ten times and I still find it delightful to see what can be done with strategically placed green screens and matching footage. My favorite parts are the walk through Red Square in Moscow, the ship on fire, and the snow scene probably shot in July in L.A.
Watching this is also heightened if you have an appreciation for the challenge of matching the lighting and other environmental conditions in a scene.
What happens when fun, photorealistic 3D characters are matched with this kind of realism? Though many say we’re a long way off from being able to faithfully recreate a human digitally, I suspect we’re closer than people think. The fun aspect still exists with many 3D photorealistic characterizations, and it’s easier to pull off believability when it’s basically a major step up from a cartoon (e.g., Toy Story, Up, Shrek). But what happens as the creation and rendering technology gets so good that it is indistinguishable from reality?
Heavy Rain is an upcoming game that has gamers all abuzz about its photorealism and you should watch this HD trailer (you have to watch a lo-res advertisement first so hang in there) to see why there is so much excitement. Yeah, it’s awesome. OK…it’s still easy to tell it’s a game.
But for how much longer?
When Apple released the Apple II at the West Coast Computer Faire in 1977, it was a big deal with its color display. Since I love poking around FORA.tv and watching the thought leader videos curated there, I was pleased to see this snippet of a Steve Wozniak (Woz) interview (you can watch the entire hour+ program here) about the spark of genius. The cool thing? As you listen to Woz describe how he came up with the idea to deliver color computing at a radically reduced price, you hear the quintessential description of problem solving and creative solutions.
This was recorded at the Bay Area Discovery Museum on February 1, 2010 and they describe it this way:
Steve Wozniak, Apple co-founder and philanthropist in conversation at the Discovery Forum 2010 with Emmy-award winning journalist Dana King from CBS 5 Eyewitness News.
Renowned technology pioneer Steve Wozniak speaks to the importance of hands-on learning and encouraging creativity, and how the Bay Area Discovery Museum is a critical resource for preparing children for the challenges of the 21st century.
The Discovery Forum serves to increase awareness about the importance of childhood creativity, and raises support for the Museum’s educational exhibitions and programs.
Watch this segment of a couple of minutes (yes, there are ads first) and you’ll see what I mean about creative problem solving:
One of the dangers in being a “thought leader” or “influencer” in blogs or social media is this: others might actually believe you’re an expert and take what you say on faith, as gospel, or as their duty. On the flip side, those of us who follow so-called thought leaders make some assumptions that they’re experts or at least more plugged in than we are so they must know something we don’t (and too many people are influenced by them automatically). I’ve been seeing this happen too often in the group-think that occurs in the blogosphere and this sort of mass persuasion (or “mass meme’ing” as my friend Bill calls it) is now moving even faster with the real-time internet (e.g., Twitter).
In my several decades on this earth I’ve learned the power of propaganda, seen the unfortunate downsides to “spin” and group-think, and have been made well aware of the persuasion, motivation and psychological manipulation techniques most people with an agenda employ.
Having an agenda and trying to persuade or motivate is not inherently evil or good, it just is-what-it-is. Humans are driven by all sorts of intrinsic motivations that go well beyond Maslow’s baseline on his hierarchy of needs. In my view, Maslow’s pyramid of needs was far too happy-assed and missed many human motivators: an individual’s or organization’s hunger for celebrity, power or control; the continual nation-based struggle for resources; or a need to be dominant.
Think about all of this the next time you read something (especially a blog post or tweet), listen to a political speech, are asked to do something by your boss, or watch a TV show or movie about a big topic. What are the writer’s, tweeter’s or producer’s motivations? Who is funding it and/or what is their agenda? What are its creators trying to get you to do or think, and what action do they want you to take?
In 2004 Steve Jobs famously said about TV vs. computers, “We think basically you watch television to turn your brain off, and you work on your computer when you want to turn your brain on.” It was one of those statements that seemed like a throwaway (and one most of us did the old head bobbing up-n-down about), but it’s become more and more true since then.
My wife and I often take our laptops upstairs and lie in bed finishing up the day’s emails, exploring, and increasingly watching “TV”. In fact, my brain gets SO turned on that I find it hard to go to sleep…so I’ve actually stopped doing that in order to relax, quiet down and nod off (and older relatives have cautioned that “you’re going to ruin your marriage” by playing with our laptops at night vs. with each other).
When I first saw the delightful Alec Baldwin Hulu ad on the Super Bowl — with its clear and humorous reference on how TV watching turned your brain into a gelatinous mush they could scoop out and eat (since they’re aliens, after all) — the brilliance of the campaign took my breath away.
It did so because of the NBC team’s recognition that most of us in the always-on, always-connected participation culture, increasingly turning our attention away from traditional media like TV, radio, newspapers and magazines, view television watching as the mind-numbing, brain-mushing pursuit it is, but still one we turn to when we choose to be entertained passively.
The team obviously recognized that a fun advertisement to get our attention, one directly addressing this obvious fact and, of course, delivering a service that meets our needs whether we’re watching an actual television set or have our brains turned on with our computing devices, would nail it. And they did.
Jobs nailed it too over four years ago with that statement. He didn’t say anything about turning your brain on to perform tasks, but rather computers as an extension, a stimulator of our brains.
As we all move away from purely linear, serial tasks and processes toward a world where we drink in information, news, entertainment while connecting with others in a parallel and associative way, I’m eager to live in this time of awakening where more and more of us are living in a perpetual state of having our brains turned on.
A final update on our experience with Learning Breakthrough (LB) since many people are following along and interested.
No question we received benefit from LB…it just wasn’t effective enough. Unfortunately, it became a burden, and my son was pulling back from it and goofing around, so we ended up not moving forward after the first five months. We’d read that there was a plateau period, and we moved past that, but the benefits we were receiving from LB just weren’t enough of a payoff for the effort we put into it.
Learning Breakthrough (or Dore, in my opinion) is probably about as beneficial as having an ADD/ADHD person perform daily aerobic exercise and eat a good diet…and we all know how few of us do the things we know we should. Trying to get a kid to stick to something like LB is quite a challenge.
Then a Doc (Dr. Chuck Parker) who writes CorePsychBlog sent me an email since I’d written about brain SPECT imaging on this blog. Having the SPECT analysis helped us identify the subtype of ADHD my son was experiencing. Parker and I went back and forth, I helped him with his blog, and he ended up offering to work with my son (though a local Doc has to prescribe). Parker believes in looking at the whole person, the “core” of the psychology, vs. just treating or focusing on one area like the cerebellum (which is the area of the brain positively affected by Learning Breakthrough or Dore).
I knew it. I can see into the future and so can you. Here’s how, and why this phenomenon explains why optical illusions trick us.
Researcher Mark Changizi of Rensselaer Polytechnic Institute in New York says it starts with a neural lag that most everyone experiences while awake. When light hits your retina, about one-tenth of a second goes by before the brain translates the signal into a visual perception of the world.
Changizi says our visual system has evolved to compensate for neural delays, generating images of what will occur one-tenth of a second into the future. That foresight keeps our view of the world in the present. It gives you enough of a heads-up to catch a fly ball (instead of getting socked in the face) and maneuver smoothly through a crowd.
When you really dig into why optical illusions work, it’s your brain compensating for that lag and anticipating, assuming and predicting what happens (or should happen) next.
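The compensation described above can be sketched as simple dead reckoning: take an object's last sensed position and extrapolate it forward by the length of the lag. This toy Python sketch is my own illustration, not Changizi's actual model; the 0.1-second delay is the figure from the article, and the ball's position and speed are made-up numbers.

```python
DELAY = 0.1  # seconds of neural lag, per the article

def predicted_position(pos, velocity, delay=DELAY):
    """Extrapolate where a moving object will be once perception catches up."""
    return pos + velocity * delay

# A fly ball moving at 30 m/s: by the time the retina's signal has been
# processed, the ball has traveled another 3 meters. Predicting ahead by
# the lag keeps the perceived position in sync with the real one.
sensed = 12.0                        # meters: where the ball was when light hit the retina
actual_now = sensed + 30.0 * DELAY   # where it really is after the lag
print(predicted_position(sensed, 30.0))  # matches actual_now, about 15 m
```

The same idea explains the illusion claim: when the scene is static but drawn to suggest motion, the predictive correction fires anyway and you "see" movement that never happens.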
This has more meaning for me than most though.
As someone gifted with an Attention Deficit Disorder (ADD, which I do view as a gift) and the father of an ADD daughter and a 13-year-old son with Attention Deficit Hyperactive Disorder (ADHD, and more gifted than I), I’ve come to learn that one reason for this ‘syndrome’ is a lag in the cerebellum caused by reduced blood flow in the prefrontal cortex.
One thing the three of us share is the ability to see things other people don’t, along with other advantageous attributes: associations between seemingly non-associated things (i.e., connecting the dots); an inability to block input, causing us to take everything into our brains; and a frustration with anything linear and serial, compelling us to find ways around obstacles and barriers and cut to the chase.
The trick for non-ADD/ADHD people is to place yourself in positions to take it all in and not turn it off. Let yourself be inundated with information, frustrated with process and procedure, and you’ll find yourself seeking those spaces and solutions that connect dots. It’s worked for many people I know, and they’ve then felt the benefits of the gift I feel ADD and ADHD is for my kids and me.
Just for grins, take a look at probably the best compilation of optical illusions on the ‘net and you’ll find your brain hurting after just a few!
Facing a six hour adventure to get home from New York yesterday, I stopped in an airport bookstore to see if something caught my fancy that would be an immersive read. In the days when I traveled over 80% of the time, I remember buying magazines (then much less than the $5-$10 they are now) but even then most were like needing a good meal and instead sitting down to a plate of cotton candy. Not very satisfying and pretty ephemeral.
The book I chose was Norman Doidge’s The Brain That Changes Itself. Doidge takes us on a journey through the developments in brain science that have led to the current understanding that the brain is “plastic”: it can be molded, shaped, and rewired. “For years the doctrine of neuroscientists has been that the brain is a machine: break a part and you lose that function permanently. But more and more evidence is turning up to show that the brain can rewire itself, even in the face of catastrophic trauma: essentially, the functions of the brain can be strengthened just like a weak muscle.”
There were many aspects of this book that leapt out at me, but one key point I’ll highlight as I recommend it: permanently imprinting and creating brain maps (i.e., permanent behavior changes, knowledge permanence, automatic responses and deep, intuitive understandings) only happens when a human or animal is focused and paying close attention.
That’s right. Multitasking (Linda Stone positions it as continuous partial attention) WILL NOT hardwire our brains, and anything we’re learning, hoping to absorb permanently, or habits we’re intending to change…won’t stick.
Doidge brings up numerous examples of brain rewiring and plasticity that I’m thinking about now, with lots of questions swirling: What happens to our brain maps and wiring when our conceptual and spatial awareness extends into the virtual? (I’ll bet you can visualize which folder on your computer holds that important document or photo…or what’s on your friend’s wall in Facebook from last night.) Will automating processes begin to replace the need to hardwire them into our brains? When we all have mobile computers in our pockets and can instantly look up anything, will we need to permanently imprint knowledge?