After I wrote the post Effective Green Screen Gear You Can Buy Inexpensively, many people asked me where I get my virtual backgrounds for use in Zoom, so I’ve added links below.
I thought I’d toss up a few screenshots of me using a few of the backgrounds I’ve downloaded. While the first one in the upper left of the image above is a composite one I created in Photoshop from two others, all the rest were downloaded from free sites like these:
But it’s not just Zoom that offers virtual background capability. Many other virtual meeting software offerings are scrambling to offer virtual background capability since people love the feature. Here are links to a few popular offerings with direct links to their virtual background help pages:
Hopefully it will ship before my next scheduled webinar, but the $295 Blackmagic ATEM Mini will help me go to the next level. This device will let me connect up to four HDMI inputs (all of my good cameras, plus the HDMI out from my laptop, iPad or iPhone).
One of the best features of this device is that it also lets me feed virtual backgrounds directly into the switcher. So, for example, when I switch from my laptop presentation to the live camera feed on me, the switcher’s output appears as a “webcam” that shows me over the virtual background of my choice. In addition, any app that accepts a webcam will see me overlaid on that background.
If you’re interested in the ATEM Mini (or the ‘Pro’ version) you might want to check out a few of these videos on YouTube.
BUT IT’S NOT JUST VIRTUAL BACKGROUNDS…
Don’t look or sound like you are using a computer, smartphone or tablet for the very first time when you’re in a meeting! Even if it is your first time, practice with someone beforehand and, at the very least, LOOK AT YOURSELF so you can come across well online.
These are the best tips I’ve found yet on YouTube, and I thought you might enjoy them:
Good luck and stay safe.
Wanted to show off a bit with some new gear. I’ve been doing a bunch of tech coaching for a guy I know, helping him with his website, and, along with other client connections, I’ve increasingly been on webinars, online meetings, and Skype group calls. I was sick and tired of my crappy-looking video, so I bought a green screen and some lights, and after a lot of goofing around to figure stuff out, it’s all up and running and working flawlessly.
For effective green screen video one needs good lighting and, most importantly, a high-resolution camera. Unfortunately, webcams don’t work (even though so many people insist they do): green fringing is startlingly obvious when I’m superimposed over an image or video. So I invested a bunch of time figuring out how to use my Nikon D500 DSLR as a very expensive webcam!
Here is my setup:
- Nikon D500 DSLR set in Live View with tweaked settings so it doesn’t automatically shut off after 10 minutes! With my lens, this is a $4,000 camera. If you do not have a high-resolution camera, a good mirrorless or DSLR model will set you back $1,000–$3,000 or so.
- UPDATE: Here is a good 2020 post with some options on cameras that can be less expensive: The 6 Best Video Cameras for Green Screen (Chroma Key) in 2020
- LED lighting kit = $300 (at Amazon) These LEDs are very flexible: they are bi-color with a variable temperature range of 2300K–6800K, so it is easy to warm up or cool down the color of the light, and they have a brightness range of 10–100%. Pretty dang good for cheap lights!
- Soft boxes for those lights = $80 (at Amazon) I needed diffusion for these lights, as they were a bit harsh when maxed out in brightness.
- Elgato Green Screen = $160 (at Amazon) Though I’d like one a bit wider, this is the best product of its type I’ve seen yet.
- Blue Raspberry microphone = $220 I own this one since it also works with iPhone and iPad.
Though I already had the camera and microphone, for just under $600 I added good-quality green screen video capability. (NOTE: In the photos below you’ll notice a RODE microphone on top of my Nikon D500. I only use that when recording video into the camera’s storage, usually for remote setups.)
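For context on the green-fringing problem above: chroma keying is, at its core, a per-pixel decision. Here is a deliberately simplified sketch in plain Python (real keyers like Zoom’s work in other color spaces with soft mattes and spill suppression); the threshold and sample pixel values are made up for illustration.

```python
def key_pixel(fg, bg, threshold=40):
    """Naive chroma key: show the background pixel wherever green
    clearly dominates red and blue in the foreground pixel."""
    r, g, b = fg
    if g - max(r, b) > threshold:
        return bg   # classified as green screen: background shows through
    return fg       # classified as subject: foreground kept

background = (0, 0, 255)  # stand-in background pixel (pure blue)

print(key_pixel((20, 200, 30), background))    # well-lit screen pixel: keyed out
print(key_pixel((210, 160, 130), background))  # skin-tone pixel: kept
# A soft, low-resolution edge pixel is a green/subject mix; it slips
# past the threshold, keeps its green cast, and shows up as fringe.
print(key_pixel((120, 150, 110), background))
```

A sharp, well-lit, high-resolution camera keeps edge pixels decisively “screen” or “subject,” which is why a DSLR keys so much more cleanly than a webcam.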
As a lay student of history, I’ve been thinking a lot about what it must’ve been like as the world shifted from an agrarian, farm-based economy — with most people living on farms — to a mechanized, industrial one in the late 1700s and early 1800s, when people migrated to cities and to jobs in factories and offices.
According to Wikipedia, “The industrial revolution brought about various shifts in agriculture, manufacturing, and transportation, which had a profound effect on the socioeconomic and cultural conditions in Britain. The changes subsequently spread throughout Europe and North America and eventually the world, a process that continues today as industrialisation.
The onset of the Industrial Revolution marked a major turning point in human society; almost every aspect of daily life was eventually influenced in some way.”
Think about the pain and angst people felt as their kids left home for the city, children labored in factories, wages were low and conditions horrendous, and how much time it took for some sort of equilibrium to occur. It took many decades.
I would argue that we’re right in the midst of an internet and cleantech revolution that’s just begun and is influencing almost every aspect of daily life right now. As Bruce Sterling so famously said, “The future is here. It’s just not evenly distributed yet.”
The internet, and my business, personal and learning use of it, has fundamentally changed my life and the lives of those around me. The same could be said for many others I know. Of course, then there are those in my life who don’t even have computers, or who use their mobile phones for voice only. It will take years (decades?) for the future internet to get evenly distributed, though I predict it’s going to happen far faster than anything that’s come before.
Today’s announcement by Adobe of the Open Screen Project has been well covered in the blogosphere. What hasn’t been well covered is the story-behind-the-story and that this is a major salvo in the hybrid application war.
I’ve written before about the rich internet application (RIA) space (here, here and here, for example) and the momentum building behind the tools, approaches and delivery containers, with content, data and functionality mashed up and delivered in a hybrid manner.
As the world becomes increasingly connected, broadband and wireless speeds increase, and device types with internet connectivity proliferate, the demand for more and more functionality integrating the desktop and the internet is accelerating. The major vendors (and open source ones) are trying to figure out how to empower us to create and deliver new digital assets that customers will value and buy.
What isn’t discussed much is the now primarily covert ‘war’ underway between Adobe with Flash (and AIR, Media Player, et al.), Microsoft with Silverlight, Apple with WebKit (though little has been said publicly about what they might do in the RIA space, or how they might leverage the stealth QuickTime installs on Windows via iTunes and the recent Safari Windows release), and Mozilla’s Prism. All are focused on providing a winning environment upon and within which content creators, developers and strategists can deliver ever higher value and create competitive advantage for themselves and their companies. Whoever pulls that off will win.
Four very different approaches, market positionings, creation and development tools, and overall go-to-market plans (most of which an outsider can only guess at), but the promise of RIAs is huge for applications and for us, whether we want to create-and-deliver or just enjoy the fruits of others’ labors: replacements for current web apps; completely new categories; and even one area we’re already exploring in my company, a new type of subscription, self-updating ebook in which RSS feeds, video and audio automagically appear when a subscriber opens it while connected to the ’net.
Who will win? I don’t know yet but the winner will be the one with the best tools, the largest runtime container distribution, and the most support from the ecosystem surrounding them. The momentum is with Adobe but, then again, it was with Apple in 1980 at the dawn of the personal computing industry, and we know how that turned out.
The ‘sprout’ (their term vs. ‘widget’) you see below is one I created in 15 minutes. It took me longer to open Photoshop, reduce the size of the Connecting the Dots header and type in the pathnames to my podcasts (yes, I know… they’re OLD) than it did to create the sprout!
I just grinned and shook my head in disbelief as I used it, since Sprout has delivered on my pent-up desire for just such a mashup-and-creation tool, which begs the question: why the hell didn’t Adobe do this with their rich internet application (RIA) strategy and the Adobe Integrated Runtime (AIR)? To date, mere mortals — who are savvy enough to use InDesign, Photoshop, Illustrator and the like — can’t truly deliver on AIR, Microsoft Silverlight or even WebKit apps unless the propeller on their beanie is fairly large.
There are a few nits (the words “Click on any playlist…” were bolded and italicized which didn’t publish) but they’re so few compared to the power Sprout has unleashed they’re easily overlooked. I also want to understand what they’ll charge for the service — or those I direct to Sprout to create — before I get too fired up about recommending people leap on the tool and deliver mission-critical products.
I also noticed a slight latency as my ‘sprout’ loads which you might notice also. I’ve been a broken record on the topic of the “dirty little secret” — that Internetwork latency is already affecting mashups, Web/Enterprise 2.0 applications, video delivery and essentially everything we do over the Internet — but this latency won’t likely slow down the creation and delivery of mashed up applications. I hope, really hope, that this latency doesn’t crush the spirit of those of us truly wanting to create and deliver significantly higher value on the Web with tools like Sprout.
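To make the compounding effect concrete, here is a toy model of why per-widget latency matters on a mashed-up page; the round-trip times below are made-up numbers for illustration, not measurements of Sprout or any real service.

```python
def page_ready_time(widget_rtts, concurrent):
    """Idealized time before a mashup page is fully interactive.
    Concurrent fetches wait only on the slowest widget; sequential
    fetches stack every round trip on top of the last one."""
    return max(widget_rtts) if concurrent else sum(widget_rtts)

# Assumed per-widget round trips, in seconds (hypothetical values).
rtts = [0.08, 0.12, 0.30, 0.09]

print(page_ready_time(rtts, concurrent=False))  # sequential: every RTT stacks
print(page_ready_time(rtts, concurrent=True))   # concurrent: slowest RTT wins
```

Even with concurrent loading, the page is only as fast as its slowest embedded source, which is why a single laggy widget is felt by every visitor.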
Using this tool for 30 minutes tonight has sparked about 25 ideas for how I’d use it. From completely self-contained multimedia slideshows to a different sort of ebook to a poor man’s RIA, I suspect many others will have exactly the same reaction and start building these things like mad.
In my work it’s imperative I stay abreast of new technologies, approaches and how social media startups are figuring out how to increase our capability to connect to one another in more interesting and meaningful ways.
But how many places can we focus our attention?
I blog. Follow and skim 138 blogs and dozens of news feeds in Google Reader. Deal with dozens of emails per day. Scan Techmeme and Blogrunner. Post and follow people on Twitter and now Pownce. Barely use Facebook but feel compelled since so many people I know are using it. Just joined Seesmic (in private alpha) which is a social network for participatory video (see what your friends post, you can post, and a ‘conversation’ can carry forward). Scroll through Digg‘s feed and often click on an article.
Oh….and I have work to do for my clients and business!
Since one of my strengths is “input” (collecting information is something I love to do), I thought my scattered focus and partial attention were atypical until I talked to dozens of other people. Nearly everyone I talk to is feeling the effects of traditional media clamoring for our attention, more coverage and news with less analysis than ever before, and thousands of new media methods (some of which I mentioned above) that are connecting us in ways that make it very challenging to think, mull things over and breathe.
Many business leaders feel that this continuous partial attention is a Millennial or kids’ phenomenon, but my own anecdotal research shows that it increasingly cuts across all age groups, demographics and cultures (Linda Stone has the seminal thoughts on the topic).
Anyone with a computer and an internet connection is now a mini-media mogul, since it’s trivial to publish, create radio and TV (even live streams a la uStream, Qik, Stickam), deliver screencasts and learning content, and stake a claim in the micro-blogging arena (e.g., Twitter, Pownce) and snag followers tuning in to your thought stream.
With all of these sources coming at us (or those we choose feeling compelled or pressured to stay abreast of their content) while we pay continuous partial attention to each, what happens to these attention traffic jams in our brains? How can we discern what is worthy of our attention since not all of it is?
My daughter had a college paper to write and ended up doing it on “Old and New Media Influence on Anti-American Sentiment.”
What was fascinating was to read this report (PDF) from May 2007, entitled “The Communication of Anti-Americanism: Media Influence and Anti-American Sentiment,” by the Department of Communications at Cornell University, and to see that this massive research study focused on traditional media and completely left out new media!
They examined all sorts of statistics and variables in the report: country, age, income, media habits, and much more. The problem in leaving out new media is that most people under 30 have radically reduced their consumption of old media and instead are having their perceptions molded and shaped by exposure to all sorts of opinions and alternative new media forms.
Her argument was that negative perceptions of America were being molded and shaped by all media, not just traditional media. In an age when many people globally are eschewing broadcast media for social networks, YouTube, SMS, blogs, and shows like The Daily Show or even Al Jazeera’s offerings, there is no doubt that any thoughtful consideration and examination of public opinion and cross-cultural perception must include new media forms.
As I wrote this looking at that goofy picture of Ze Frank (which must frighten children and small animals), I thought about how tough it would’ve been for Nazi propaganda minister Joseph Goebbels to have done what he did to control perception had the Internet existed in the 1930s.
Mogulus just announced to their 15,000 beta testers that they were adding some new features (a “grid” to watch multiple channels at once) but that is not why I’m posting about them. Instead, it’s that you, yes you, can start and run your own TV channel and Mogulus is your very own TV Studio online.
This startup is also going to be broadcasting the NewTeeVee Live event on November 14th for free and using it as a pre-launch (end of November is launch) proving ground for what they’re offering.
Why is this a big deal and why should you care?
One reason is that you’ll be able to “attend” the NewTeeVee Live event as stated on their blog, “For those of you who can’t make it, the conference will be broadcast live by Mogulus, who prepared the promo below to give you a flavor of what’s to come. Joyce Kim of The GigaOM Show will be hosting the Mogulus broadcast with live hallway interviews.” More here.
Besides free attendance to this event, it also means that you have an atypically intriguing method of delivering high-value video content with Mogulus: you’re able to connect and switch live between multiple, geographically dispersed people (who can be “talent” or content experts on webcams), switch to video feeds from rooms or events with a live, television-like production method, and then run recorded videos 24/7 afterwards. The Mogulus player — though skinned with what I think is their butt-ugly default gray, or even their special NewTeeVee orange like you see above — can be embedded anywhere (and I hope they provide different skins upon launch!).
Take a peek at the Mogulus video after the jump and watch the whole thing, as you’ll get to the good stuff (how Mogulus works, etc.) about halfway through.
If you’re not amazed by the following video and the techniques this young math teacher used within it, I’d suggest you put down your mouse, back away from your computer and finish reading your current issue of the National Enquirer.
On the other hand, if you’d like to see what powerful tools can offer someone with a vision and the passion to deliver an end-of-summer video for friends, then take a look at what Dan Meyer has built (and his post…peek at the comments too…and I came to this by way of Christian Long’s post). It’s just over six minutes and could use just a tad bit of tightening up with editing, but the point of the video is not OUR general amusement…it was created for the people IN the video so consider that as you watch it.
I’m pretty sure I know how he accomplished each of the effects, and I can only imagine how many hours he invested in this effort. 75–100 hours is probably in the ballpark. Wow.