The word from an increasing number of voices in the semiconductor industry is that shortages and production problems are here to stay for the foreseeable future. Xbox head Phil Spencer is the latest executive to sound off on the problem, and he agrees with assessments from Lisa Su and Jensen Huang:
“I think it’s probably too isolated to talk about it as just a chip problem,” Spencer told TheWrap. “When I think about, what does it mean to get the parts necessary to build a console today, and then get it to the markets where the demand is, there are multiple kind of pinch points in that process. And I think regretfully it’s going to be with us for months and months, definitely through the end of this calendar year and into the next calendar year.”
The pinch points Spencer is talking about are scattered everywhere across the global economy. A story earlier this month at The Atlantic discussed the supply chain issues in more detail, including the enormous spike in shipping costs. Shipping containers that would have cost $2,000 to $5,000 to move before the pandemic now run $30,000, with a $20,000 option if you can deal with an uncertain delivery date measured in months.
Microsoft’s Xbox Series X is an impressive piece of kit, but it’s also difficult to find at MSRP.
Controlling Delta infections in nations across the world has required the closure of high-volume ports and shipping facilities. This has increased wait times and slowed the distribution of everything from shoes to CPUs. In ordinary times, passenger jets help alleviate some demand by carrying cargo on international flights, but there have been fewer of these flights since the pandemic began.
These supply chain problems are exacerbating shortages by disrupting manufacturers’ ability to build hardware at a time when demand has been higher than ever. Toshiba has already given notice that it will not be able to meet demand for certain types of power semiconductor circuitry until 2022, or even 2023.
These are the kinds of pinch points Spencer is referring to, and solving them isn’t going to be a fast process. If there’s a consolation prize in all this, it’s that Xbox Series X prices have come down enough on the gray market that they’re no longer an utterly terrible deal. A quick check of eBay’s recently sold auctions shows a reasonable number of systems selling for under $700. The PlayStation 5 is still running hotter, at $700 – $800. ExtremeTech does not recommend paying this much over MSRP for either console, but prices are at least a little better than earlier this year, when $800+ was normal for both systems.
Demand is unlikely to drop as we head into the holiday season, so hopefully 2022 will bring better tidings.
For the second time in a month, Facebook has been found creeping past the boundary of what’s reasonably healthy for kids. The company has been planning versions of its product adapted for “tweens” ages 10 through 12 for the last three years, including a kids’ edition of Instagram and Facebook Messenger features that “leverage” children’s playdates, according to a new feature by the Wall Street Journal.
The idea of an “Instagram Kids” has floated around for quite some time; its proposed features include hefty parental controls, age-appropriate content, and a lack of advertising. And in a time of generalized social strife, nothing has seemingly united human beings more than a distaste for the product: parents, mental health experts, child welfare advocates, law enforcement, and lawmakers have all opposed the idea on various grounds, including impacted cognitive development and child safety. The only party that remains interested in the product is, well, Facebook.
“Why do we care about tweens? They are a valuable but untapped audience,” the WSJ quotes from an internal Facebook document. “Valuable” is the insidious keyword; it reveals Facebook’s already-somewhat-obvious true intention, which is to turn a profit even at the expense of its own audience’s well-being. The company has reportedly researched how it can turn playdates into opportunities for youngsters to use the app, both while coordinating said playdates and while already spending time with one another in person. (Facebook’s internal documents don’t seem to elaborate on how this latter part would work.)
A snippet of a 2020 Facebook presentation on the tween audience. (Image: WSJ)
Facebook isn’t the first online platform to face fallout after targeting literal children. In 2019, the ever-popular short video app TikTok was found guilty of violating federal children’s privacy laws, resulting in a $5.7 million fine from the Federal Trade Commission. The company introduced unique privacy settings and defaults for minors earlier this year, though the app still seems to direct sexual and drug-related content to users under 18. Facebook has dabbled in this space before, too, most notably with the launch of Messenger Kids in 2017. But as TikTok and Snapchat lure in younger mobile device users, Facebook feels left in the dust, and as a result it’s desperate to grab kids’ attention before it’s too late.
“Kids are getting on the internet as young as six years old. We can’t ignore this . . . Imagine a Facebook experience designed for youth,” Facebook stated in a document from 2018, clearly having forgotten to ask itself whether such an “experience” was at all appropriate for the audience in question. The early adoption of social media by teenagers has repeatedly been found to be risky, as confirmed by Facebook through its conversations with kids about its plotted features. One tween told the company they didn’t know how to get a “perfect picture” like “you need to post.”
Following immense backlash from all corners of the Internet, Facebook has decided to step back from its plans to release kid-focused features and products, at least for now. “We’re pausing ‘Instagram Kids,’” Instagram head Adam Mosseri announced via Twitter earlier this week. “This was a tough decision. I still think building this experience is the right thing to do, but we want to take more time to speak with parents and experts working out how to get this right.”
Genetic diseases are a compelling target for viral gene therapy. One condition scientists are investigating as a candidate is a rare genetic disease called Leber congenital amaurosis, or LCA. LCA is a progressive condition that disables critical cells within the retina. The damage begins at birth: it eventually robs patients of central vision and color perception, often rendering them legally blind. But that grim trajectory may not be inevitable. On Wednesday, researchers presented evidence from a breakthrough gene-editing experiment that restored some color vision to patients with LCA vision loss.
CRISPR is already under investigation as a gene therapy for blood disorders like sickle cell disease and beta-thalassemia. It may well have other uses, such as treating cancer by editing mutated DNA. But the process is not without its hurdles. Treatments for blood disorders like these involve taking cells from the patient’s body, editing them in the lab, and then re-infusing them into the patient. That works great for blood, which you can take out, filter, and put back in with relatively few consequences.
But because LCA is a disease of the retina, you can’t just take out cells and then infuse them back in. The retina is a delicate, multilayered membrane that resents any disturbance. The eye also has a system of physical defenses not unlike the blood-brain barrier. Furthermore, the immune system sometimes responds with extreme prejudice to eye injuries or infections, to the point of causing an actual autoimmune disease where the body attacks its own eyes. How, then, could researchers get the CRISPR treatment into the retina, past the body’s ferocious defenses and without further damage?
The team settled on a viral vector. They chose a symptom-free virus, one that preferred to insert itself into the host’s DNA in the one place in the genome LCA patients need fixed. Then the team used CRISPR to insert an altered DNA sequence into the virus, one that would hopefully correct the LCA mutation and revitalize those disabled retinal cells. But the question still remained: if the vector can’t get into the eye through the bloodstream, how could they get the treatment to its target? Corneal transplants hold a clue. The eye is an immune-privileged site, which is part of why corneal grafts are so rarely rejected, and a careful injection directly into the eye can take advantage of that same privilege.
Seven very brave people with LCA volunteered for the experiment. To receive the treatment, the volunteers had to go under general anaesthesia. Then researchers injected a tiny amount of the solution containing the viral vector directly into the anterior chamber of one eye. They only did the experiment on one of each patient’s eyes, in case something went wrong.
Participants are still under observation. Some patients saw no benefit, and for some it’s still too soon to tell whether the treatment has even “taken.” But for others, the results are clear and bright.
One of the two participants with the best results is Carlene Knight, 55, who lives near Portland, Oregon. Knight’s vision is “much clearer and brighter” after receiving the experimental treatment. Her experience suggests even mundane improvements count. When she dropped a fork on her kitchen floor, she said, “I just leaned down to pick it up and didn’t know where it was and just saw it on the floor. It’s very cool.”
Colors, too, have begun to return. “I’ve always loved colors. Since I was a kid it’s one of those things I could enjoy with just a small amount of vision. But now I realize how much brighter they were as a kid because I can see them a lot more brilliantly now,” she says. “It’s just amazing.”
So to celebrate, she dyed her hair her favorite color: green. “It’s kind of fun to see,” she laughed, surrounded by lush green plants and wearing a shirt that matched her hair.
Franny White/OHSU, via NPR
Another participant with standout results is Michael Kalberer, 43, of Long Island. He realized the treatment had worked when he noticed the color of a red car driving past. But the high point of the process came at his cousin’s wedding, when he realized he could see the color of the flashing lights on the dance floor. “I could see the DJ’s strobe lights change color and identify them to my cousins who were dancing with me,” said Kalberer. “That was a very, very fun joyous moment.”
“We’re thrilled about this,” enthused Dr. Eric Pierce, who’s helping run the experiment testing the approach. Pierce is the director of the ocular genomics institute at Massachusetts Eye & Ear, and also a professor of ophthalmology at Harvard Medical School. “We’re thrilled to see early signs of efficacy because that means gene editing is working,” he said. “This is the first time we’re having evidence that gene editing is functioning inside somebody and it’s improving — in this case — their visual function.”
The restoration of sight made possible by this treatment doesn’t mean these participants now have normal vision. It’s not a silver bullet. But they can see better, and it’s a huge quality-of-life buff with (as yet) no known side effects.
Next steps include larger and longer-running trials — but the results of this experiment were so good that the group has already been greenlit to proceed. Once they have enough data to ensure good outcomes, the researchers plan to begin offering the treatment to children, who stand to benefit most.
For those who have already benefited from this treatment, though, it’s all bonus from here. Kalberer told NPR, “I’m just incredibly honored and privileged to be part of this, and very, very excited to literally, hopefully, see what comes in the future.”
Nobody knows what Beethoven had on his mind when he died, nor the plans he had for his unfinished work. There’s been a lot of speculation, but the arrow of time flies in only one direction, and it’s tough to pick the brain of a dead guy. That is, unless you plug all his work into an AI to figure out his style. Music historians, composers, and computer scientists have collaborated to produce a “finished version” of Ludwig van Beethoven’s unfinished 10th Symphony. The first public performance of the piece is scheduled for October 9.
Beethoven is widely known to have intended to write a 10th Symphony, but he died with the work scarcely begun. Discussions of what he might have written continue into the present day. Up until now, the 10th Symphony was known only from fragments. A musicologist named Barry Cooper assembled Symphony No. 10’s first movement from these fragments back in the late 1980s, but this new project went farther and attempted to complete the second, third, and fourth movements using AI tools.
This new project is guaranteed to stoke controversy. In researching Cooper’s original effort, we came across pieces of an acrid exchange between Cooper and another musicologist, Robert Winter, from the mid-1980s. Winter wrote: “His [Cooper’s] result must be compared to a standard established by Beethoven. That gap, I maintain, is demonstrably so enormous as to render Cooper’s ‘realization and completion’ misleading rather than illuminating.” If a human-completed effort can be that controversial, you can bet an AI project will drive further controversy.
(Also, I just want to point out that it’s remarkably appropriate that an AI helped finish Beethoven’s Symphony No 10, considering we compute in binary. Get it? One-zero? …I’ll see myself out.)
Ludwig van Beethoven, as painted by Joseph Karl Stieler. Public domain.
There have been previous efforts to build AIs capable of writing music in the style of long-dead composers, but the previous major effort, DeepBach, didn’t tackle a challenge nearly as complex as this. An extensive post by Ahmed Elgammal at The Conversation details his work as part of a team to (re)create the symphony. (No one has written a Bach AI called “I’ll be Bach?” – Ed)
“Most fundamentally,” Elgammal writes, “we needed to figure out how to take a short phrase, or even just a motif, and use it to develop a longer, more complicated musical structure, just as Beethoven would have done. For example, the machine had to learn how Beethoven constructed the Fifth Symphony out of a basic four-note motif.”
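Extending a short motif into a longer passage is the bread and butter of sequence models. As a toy illustration only — this is not the team’s actual pipeline, which relied on far more sophisticated models trained on Beethoven’s complete works, and the “corpus” below is made up — here is a minimal Markov-chain sketch in Python that learns note-to-note transitions and then continues a seed motif:

```python
import random
from collections import defaultdict

# Toy corpus of pitch sequences (MIDI note numbers). A real system would
# train on Beethoven's scores; these values are invented for illustration.
corpus = [
    [67, 67, 67, 63, 65, 65, 65, 62],   # echoes the famous four-note motif
    [60, 62, 63, 65, 67, 65, 63, 62],
    [67, 65, 63, 62, 60, 62, 63, 65],
]

# Learn first-order transition counts: which note tends to follow which.
transitions = defaultdict(list)
for seq in corpus:
    for a, b in zip(seq, seq[1:]):
        transitions[a].append(b)

def continue_motif(motif, length=16, seed=42):
    """Extend a seed motif by sampling learned note-to-note transitions."""
    rng = random.Random(seed)
    out = list(motif)
    while len(out) < length:
        candidates = transitions.get(out[-1])
        if not candidates:              # dead end: fall back to the motif itself
            candidates = list(motif)
        out.append(rng.choice(candidates))
    return out

print(continue_motif([67, 67, 67, 63]))   # develop the opening motif
```

A first-order chain like this only captures the most local patterns; the published project also had to model phrasing, harmony, and large-scale structure, which is what made the problem hard.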
A section of the audio is available below:
The article steps through how the team painstakingly trained the AI on music samples and demoed it for musicians and experts, asking the listeners if they could identify where the AI-generated components began and ended. They could not.
An effort like this is going to be controversial in certain quarters. Some will question whether words like “finished” or “completed” should be used, though we don’t have any ready alternatives. There will always be an asterisk beside such efforts. We can’t re-create the artist, but we can use their known patterns and idiosyncrasies to create a composition in their style. Where you draw the line on whether it finishes the symphony is a matter of personal judgment — but then, music is a very personal thing.
Machine learning offers the tantalizing prospect that we might slip the surly bonds of individual interpretation and turn the task over to an algorithm. The idea of programmatically assessing Beethoven at minute levels of detail suggests that a sufficiently advanced AI might one day write a piece of original music indistinguishable from Beethoven’s own. What if an artist or composer had an AI assistant trained closely on their work? Creators often leave behind an unfinished painting, sonnet, novel, or musical composition. With such a tool, artists need not fear that they might die with work yet undone.
The COVID-19 pandemic has caused widespread economic disruption and forced changes to the way many of us work and learn, but it’s been a boon for Chromebook sales. A new report from TrendForce suggests the good times are coming to an end for those Google-powered laptops. Windows demand should be buoyed by Windows 11’s release, but the report predicts that Chromebook sales will continue to drop through next year.
As the pandemic gained steam in early 2020, many families were forced to quarantine, both working and learning from home. For any household without enough computers, Chromebooks were the perfect option. While Chrome OS doesn’t have as many features or software options as Windows, Chromebooks are cheap and work just fine for accessing web content and services. The pandemic is far from over, but market dynamics are beginning to change. Chromebook demand saw a substantial decrease over the summer, dropping 50 percent in July. Samsung and HP, both of which have a large number of Chromebooks in their portfolios, have felt the brunt of the dip; TrendForce expects their Chromebook shipments to fall by 10 to 20 percent.
TrendForce points the finger squarely at increasing vaccination rates in North America, Europe, and Japan. As more people get the jab, they’re returning to offices and schools — in many US jurisdictions, there has also been a concerted push to get grade school kids back into classrooms, regardless of the potential impact on the spread of COVID-19. As people return to some semblance of normal, Chromebooks are sitting on the shelves longer.
It’s uncertain what will happen in the last quarter of the year; if the pandemic worsens again, at-home work and school could prop up Chromebooks. Plus, the FCC released the $7.17 billion Emergency Connectivity Fund over the summer, which will fund the purchase of computers, tablets, and network equipment by schools and libraries. That could help sustain sales throughout the remainder of the year.
In a best-case scenario, laptop shipments in Q4 could hold steady with Q3, but 2022 is expected to bring a further decline regardless. TrendForce expects a 7.3 percent drop in laptop shipments next year as the pandemic continues to abate. That’s still around 220 million units, which is several times higher than the 60 million units sold in 2019 before the pandemic.
AMD has announced a major new efficiency initiative that’s intended to build off its previous 25×20 project. The company will now work to deliver a 30x improvement in energy efficiency in AI and high performance computing overall, relative to today’s CPU and GPU accelerators.
AMD’s blog post doesn’t go into much detail on how it intends to achieve these savings, beyond some references to a pressing need to lower the cost of compute in data centers, and the rapid growth of AI. Many of the companies working on large AI clusters have stated that they have halted or slowed their buildouts due to power and cooling requirements. There’s a focus throughout the industry on improving the computational efficiency of AI through a variety of methods, both in hardware and software.
We can hazard a few guesses on how AMD will hit these goals based on its known IP development. First, it would be helpful to know which GPU architecture AMD is comparing against. The blog post and PR only mention “Radeon Instinct,” but there are multiple GPU architectures in the Radeon Instinct family. If AMD is using one of its older GCN parts for comparison, the 30x target by 2025 is easier to hit.
There are rumors that AMD’s Zen 4 architecture will support AVX-512, which suggests another avenue by which AMD might boost its AI performance and overall efficiency. AMD has a decades-long history of adopting Intel’s instruction extensions roughly one generation (n-1) behind, or once those extensions have been exclusive to Intel products for a significant period of time.
By the time Zen 4 presumably appears in late 2022 with rumored AVX-512 support, Intel should have Sapphire Rapids with support for AMX (Advanced Matrix Extensions) built in. AMD might have added AMX support or be preparing to add it by 2025. It’s not clear exactly how much efficiency AMD would gain from adopting these new SIMD sets, but we can assume that a fair share of the company’s total improvement will come from new instruction support, via AVX-512 if nothing else.
Next up, there’s the potential performance advantage of AMD’s V-Cache. Caching data generally improves the performance of many workloads, but it’s possible that AMD has specific plans in mind for how it can leverage large L3 caches to boost AI power efficiency in the future. Today, CPUs can expect to spend as much or more power moving data as they do computing on it. Larger caches and better caching algorithms could boost AI execution efficiency by reducing the amount of data that needs to move on and off a given CPU. Improvements to AMD’s ROCm software translation layer could also yield some significant advances in AI power efficiency.
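To see why cutting data movement matters so much, consider a back-of-the-envelope energy model. The per-operation figures below are illustrative placeholders, not AMD numbers or measurements, but the shape of the result holds: once off-chip traffic dominates, raising the cache hit rate does more for energy than making the math units faster.

```python
# Toy energy model: compute energy vs. data-movement energy for one workload.
# All constants are illustrative assumptions, not measured or vendor figures.
E_FLOP_PJ = 1.0          # picojoules per floating-point operation (assumed)
E_DRAM_BYTE_PJ = 20.0    # picojoules per byte fetched from DRAM (assumed)
E_CACHE_BYTE_PJ = 1.0    # picojoules per byte served from on-die cache (assumed)

def workload_energy_joules(flops, bytes_touched, cache_hit_rate):
    """Total energy for a workload, splitting traffic between cache and DRAM."""
    dram_bytes = bytes_touched * (1.0 - cache_hit_rate)
    cache_bytes = bytes_touched * cache_hit_rate
    picojoules = (flops * E_FLOP_PJ
                  + dram_bytes * E_DRAM_BYTE_PJ
                  + cache_bytes * E_CACHE_BYTE_PJ)
    return picojoules * 1e-12

flops = 1e12           # one teraFLOP of work
bytes_touched = 1e11   # 100 GB of operand traffic

for hit_rate in (0.5, 0.9, 0.99):
    e = workload_energy_joules(flops, bytes_touched, hit_rate)
    print(f"cache hit rate {hit_rate:.0%}: {e:.2f} J")
```

In this toy setup, pushing the hit rate from 50 percent to 99 percent cuts total energy by several times without touching compute at all, which is exactly the lever a large L3 cache pulls.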
By 2025, we should be seeing the fruits of AMD’s Xilinx purchase/merger, and manufacturers like TSMC should be pushing into 2nm and beyond. While manufacturing and lithography improvements don’t reduce power consumption the way they once did, we’re still talking about several generations of successive improvements relative to 7nm. AMD tends to lag the leading edge by a couple of years these days, but 2nm isn’t out of the question by the end of 2025. The cumulative improvements from three node shrinks — 5nm, 3nm, and presumably 2nm — should be at least as big as the gains from 16nm to 7nm and might be a bit larger.
What makes AMD’s claim a bit eyebrow-raising is the position the company is in relative to its previous 25×20 plan. When AMD set its 25×20 goal, it was targeting a 25x improvement in energy efficiency over six years, based on where the company found itself back in 2014. This was during the Bulldozer era, when power efficiency wasn’t exactly AMD’s strongest suit. AMD’s efficiency baseline in 2020 was much stronger, whether you measure from Zen 2 + Vega or from Zen 3 + CDNA. Delivering such a high rate of improvement from that starting point is going to be tricky.
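Some quick arithmetic shows why. Assuming a 2020 baseline and a 2025 target (mirroring how 25×20 was framed; the exact baseline year is our assumption, not something AMD spells out), a 30x gain over five years demands a steeper annual rate than 25x over six years did:

```python
# Compound annual efficiency gain required to hit a multiplier over N years.
def annual_rate(multiplier, years):
    return multiplier ** (1.0 / years)

# 25x20: 25x improvement targeted over six years (2014 -> 2020).
# 30x25: 30x improvement over five years (assumed 2020 baseline -> 2025).
print(f"25x over 6 years: ~{annual_rate(25, 6):.2f}x per year")   # ~1.71x
print(f"30x over 5 years: ~{annual_rate(30, 5):.2f}x per year")   # ~1.97x
```

Roughly doubling efficiency every single year, starting from already-efficient Zen-era silicon rather than Bulldozer-era parts, is a much taller order than 25×20 was.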
AMD undoubtedly intends to hit its target, but keep in mind that these targets haven’t stopped the absolute amount of power consumed in computing from trending steadily upwards. One of the most fundamental methods of improving performance, regardless of any underlying efficiency trend, is throwing more transistors and electricity at a problem.
Ultimately, the question for AMD isn’t whether it can deliver a 2x, 5x, or 30x increase in energy efficiency by 2025 — it’s how well the company’s CPUs will compete against the ARM and x86 CPUs that’ll be in market by then.
Today you can save over $400 on a gaming desktop from Dell that comes equipped with an Intel Core i5 processor and an Nvidia GeForce RTX 3060 graphics card. This makes the system well suited for gaming at 1080p and 2K resolutions.
Dell XPS 8940 SE Intel Core i5-11400 Gaming Desktop w/ Nvidia GeForce RTX 3060 GPU, 8GB DDR4 RAM and 1TB HDD for $999.99 from Dell (List price $1,429.98)
Samsung 970 Evo Plus 2TB M.2 NVMe SSD for $246.94 from Amazon (List price $499.99)
Dell S2721QS 27-Inch 4K IPS Monitor for $319.99 from Dell (List price $539.99)
Echo Show 5 5.5-Inch Smart Display for $44.99 from Amazon (List price $79.99)
Dell’s new XPS 8940 features an updated design, and it comes loaded with strong processing hardware that’s able to tackle just about any task you throw at it. The Intel Core i5-11400 with its six CPU cores is well suited for running numerous applications at the same time, and the Nvidia GeForce RTX 3060 graphics card lets the system run games at high settings fairly well, making it a fitting machine for both gaming and work. Currently you can get one of these systems from Dell marked down from $1,429.98 to just $999.99.
Reading data at 3,500MB/s, this SSD hits the limits of what the M.2 interface is capable of when connected using PCI-E 3.0 lanes. The drive was built using Samsung’s V-NAND 3-bit MLC NAND, which offers excellent performance. The drive is also rated to last for up to 1.5 million hours before failing, and it is marked down from $499.99 to $246.94 on Amazon.
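That 3,500MB/s figure is no accident: it sits just under the ceiling of a four-lane PCIe 3.0 link, which is what a typical M.2 NVMe slot provides. A quick back-of-the-envelope calculation shows where the limit comes from:

```python
# Approximate usable bandwidth of a PCIe 3.0 x4 link (typical M.2 NVMe slot).
GT_PER_LANE = 8.0            # PCIe 3.0 signaling rate: 8 gigatransfers/s per lane
ENCODING = 128.0 / 130.0     # 128b/130b line encoding efficiency
LANES = 4

raw_MBps = GT_PER_LANE * 1e9 * ENCODING / 8 / 1e6 * LANES
print(f"Raw link bandwidth: ~{raw_MBps:.0f} MB/s")   # ~3,938 MB/s

# Protocol and controller overhead eats a further slice, leaving roughly
# 3,400-3,600 MB/s of real-world throughput for the fastest PCIe 3.0 drives.
```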
Dell’s S2721QS is a 27-inch monitor that sports a 4K IPS panel with HDR and FreeSync support. The monitor can also be used for detailed video editing work as it covers 99 percent of the sRGB color gamut, and it also has a built-in pair of 3W speakers. Currently Dell is selling these monitors marked down from $539.99 to $319.99.
Amazon’s Echo Show 5 features a 5.5-inch HD display and is compatible with a wide range of Amazon- and Alexa-enabled services. It can work as a display for home security devices like Ring’s video doorbell, and it can be used for calling people and numerous other functions. I personally like to use mine for watching YouTube videos before bed and for listening to music. The Echo Show 5 typically costs $79.99, but you can buy it now from Amazon marked down to $44.99.
Note: Terms and conditions apply. See the relevant retail sites for more information. For more great deals, go to our partners at TechBargains.com.
Working as an IT consultant or engineer with Amazon Web Services can be a lucrative career path, but it requires a lot of specialist knowledge and training to be successful. It makes sense for anyone working in related industries, or anyone who wants to, to keep up on the latest information, training, and certifications so they can do the best job possible every single day. Best practices in information technology are always changing, so it pays to keep learning and to maintain a high level of understanding of the latest tools and techniques.
Right now, you can get lifetime access to The Premier All AWS Certification Training Bundle for the further reduced price of $11.40 when you use the coupon code VIP40 at checkout, a 94 percent discount off the ordinary purchase price of the training bundle. Refine and build your knowledge with this essential bundle and keep on the cutting edge in your career.
The bundle offers lifetime access to seven exam simulation courses, including the essential areas of Certified Cloud Practitioner, Solutions Architect, Developer Associate, SysOps Administrator, Certified Data Analytics – Specialty, Certified Security – Specialty, and Certified Advanced Networking – Specialty. These are all courses you can take at your own pace and whenever you need them. The average rating of the included courses is a high 4.6 stars out of five, showing their value to AWS professionals. The bundle will help you to learn about cloud computing and the AWS Cloud ecosystem, design, develop, & deploy cloud-based solutions, and master AWS operational best practices.
The Premier All AWS Certification Training Bundle is now on sale for $11.40 when you use the coupon code VIP40 at checkout, meaning you can keep up to date on your AWS certifications and explore the rapidly expanding world of AWS and cloud computing.
Note: Terms and conditions apply. See the relevant retail sites for more information. For more great deals, go to our partners at TechBargains.com.
Apple has just released the updated iPad Mini, a device that packs most of what makes the larger iPads so popular into a diminutive frame. This tablet has an 8.3-inch 60Hz LCD display with narrow bezels and impressive brightness for a tablet. It also exhibits a phenomenon dubbed “jelly scrolling,” which some owners are unable to ignore now that they’ve seen it. Apple, however, says the screen is working as intended.
Jelly scrolling has popped up from time to time, usually when an OEM mounts the display in a phone oddly. For example, OnePlus designed the OnePlus 5 with an upside-down screen. This is a problem because LCDs refresh from top to bottom; if you flip the panel around, content at one end of the screen is drawn noticeably earlier than content at the other, and that varying refresh across the panel makes things look weirdly “liquid” as they move around.
The iPad Mini does not have an upside-down LCD panel, but the jelly scrolling effect is obvious when you hold the device in portrait orientation. As you can see in the video below, the screen refreshes side-to-side when you hold it in portrait. You can also see a clear dividing line down the middle of the LCD. That makes content look like it’s stretching as you scroll (almost everything on mobile devices scrolls vertically), but the effect is absent in landscape orientation because the screen refreshes top to bottom as intended.
Here is slow-mo video of scrolling on the iPad Mini slowed down EVEN MORE in a frame-by-frame step through. Notice how the right moves up faster than the left.
In normal usage you barely see it, but every now and then it becomes noticeable. In landscape it goes away entirely pic.twitter.com/iq9LGJzsDI
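The size of the effect is easy to estimate. If the panel takes one full refresh interval to sweep from one edge to the other, content drawn at the end of the sweep has scrolled a little further than content drawn at the start, and that difference reads as shear. A rough sketch, where the scroll speed is an assumed value rather than a measurement of the iPad Mini:

```python
# Estimate the visual shear ("jelly") caused by a rolling-scan refresh.
refresh_hz = 60.0         # panel refresh rate
scroll_px_per_s = 1200.0  # assumed scroll speed while flicking through a page

sweep_time_s = 1.0 / refresh_hz          # time for one edge-to-edge scan
shear_px = scroll_px_per_s * sweep_time_s

print(f"Content drawn last lags content drawn first by ~{shear_px:.0f} px")
# At 1,200 px/s and 60Hz that's ~20 px of skew across the panel -- small,
# but easy to notice on a sharp, high-contrast page of scrolling text.
```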
In response to complaints, Apple said this is normal. Because LCDs refresh line-by-line, you should expect jelly scrolling. We disagree. Most devices don’t exhibit noticeable jelly scrolling, even though they all work the same on an engineering level. Apple’s other iPads with 60Hz screens don’t attract the same complaints, including the new $329 base model iPad. For $500, you’d expect the Mini to be at least as good, but it’s unclear why the distortion is more prominent here. Perhaps the smaller footprint makes the stretching more obvious, or Apple might have gone with a cheaper display on the Mini to save a few bucks. Ars Technica reported Apple’s initial response and maintains that this problem is uniquely visible on the new Mini.
When Apple says this is nothing to worry about, what it really means is there’s nothing for Apple to worry about. Since the iPad Mini display is working as intended, it won’t have to make any changes or recall units. If you’ve got a Mini and the jelly scrolling is already getting on your nerves, it might be time to return it — it won’t get better over time. There are rumors Apple is planning to make landscape the “correct” orientation when it releases new Pro models. So, perhaps the Mini is just ahead of the curve.
With only a few days left until the release of the long-awaited (and debated) Windows 11, it’s been revealed that most Windows users aren’t even aware that the new operating system exists. According to a recent survey by Savings.com, 62 percent of users surveyed indicated that they weren’t aware Windows 11 was on its way.
In an effort to “gauge awareness and excitement for Windows 11,” Savings.com surveyed 1,042 current Windows users regarding their knowledge of and willingness to upgrade to the new OS. A hair under 40 percent of users said they knew of Windows 11’s pending release, with about the same number of those respondents saying they’d be willing to upgrade once their time came. Nearly two thirds of all users surveyed stated they didn’t know if their devices were compatible with Windows 11, a real concern given that the OS is incompatible with older CPUs. (Luckily the PC Health Check app is capable of telling users whether Windows 11 is an option with their current hardware.) Older users were also more likely to indicate they were aware of the update, with only 28 percent of respondents between ages 18 and 24 saying they were aware of Windows 11, as opposed to 56 percent of respondents over the age of 55.
(Photo: Microsoft)
Forty percent of users isn’t exactly a great awareness rate, especially when you compare Windows with other major OS releases, like Apple’s iOS and macOS, which iPhone and Mac users tend to eagerly await. This raises the question: Why isn’t Microsoft more heavily advertising Windows 11 to PC users? While we’ve discussed in the past how Windows 11 doesn’t seem to be for any one audience in particular (likely due in part to its more stringent hardware requirements), one would assume the company would be a little more proud of its latest release, especially as new devices come preloaded with the OS beginning this fall.
Windows 11, due to begin rolling out on October 5, rides the tailwind of the overall successful Windows 10, and therefore has some big shoes to fill. It’s expected to be a worker-friendly update, with added productivity features like “Snap” multitasking layouts and Microsoft Teams integration, as well as virtual desktop support and the reintroduction of desktop widgets. Whether a decent chunk of Windows users will take advantage of the upgrade as opposed to buying new, however, is clearly still up in the air.
Fans of the iconic, beloved, heart-wrenching Babylon 5 series will be interested to hear that it’s getting a “from-the-ground-up” reboot. And it’s not just some enthusiastic rando in charge, either: J. Michael Straczynski himself is writing the pilot, and he’ll be running the show.
According to an early synopsis from Warner Bros., in the rebooted series, “John Sheridan, an Earthforce officer with a mysterious background, is assigned to Babylon 5, a five-mile-long space station in neutral space, a port of call for travelers, smugglers, corporate explorers and alien diplomats at a time of uneasy peace and the constant threat of war. His arrival triggers a destiny beyond anything he could have imagined, as an exploratory Earth company accidentally triggers a conflict with a civilization a million years ahead of us, putting Sheridan and the rest of the B5 crew in the line of fire as the last, best hope for the survival of the human race.”
Currently none of the original cast are involved with the project. Naturally there’s some speculation about whether that might change, but JMS seems set on the course he’s chosen for the show. “To those asking why not just do a continuation, for a network series like this, it can’t be done because over half our cast are still stubbornly on the other side of the Rim,” Straczynski explained in a thread. “How do you tell the continuing story of our original Londo without the original Vir? Or G’Kar? How do you tell Sheridan’s story without Delenn? Or the story of B5 without Franklin? Garibaldi? Zack?”
Straczynski then invoked Heraclitus to explain his creative direction: “You cannot step in the same river twice, for the river has changed, and you have changed.” Between B5 and now, Straczynski has worked on a ton of other creative projects. The A/V storytelling tools available today are not the same as they were then. CGI, for one, has advanced tremendously since B5’s pioneering use of the tech the first time around. Furthermore, the reboot will be produced by Warner Bros. Television. “The great news is that the new B5 is for an actual *network* with proper budgets and PR,” Straczynski pointed out on Twitter. “B5 originally had a ridiculously tiny budget, and aired on syndicated PTEN, which most folks never heard of and could only be found with a Ouija board and a hunting dog.”
JMS is also clearly aware of the hazards of show cancellation, which often happens in the middle of a season, with little chance to resolve plot details. The original B5 fought every season to be renewed, so Straczynski built in what he called trap doors and detours, allowing him to roll with problems as they happened. He’s keeping the details to himself until the big reveal, as usual. But this time, he’s confident of his grip on the helm.
So the show won’t be telling the same story in the same way as last time, because you can’t step in the same river twice. “There would be no fun and no surprises,” Straczynski said. Instead, he means to work in the same universe, but with “a ton of new, challenging ideas,” in order to create something “fresh yet familiar.”
When I first watched B5 way back when, the person who recommended it told me: “None of the characters are quite the same person at the end that you thought they were in the beginning.” All the main characters changed, in ways subtle and profound, over the course of the series. What I didn’t know going in is that I, too, would be touched and changed by their stories. Perhaps this new trip through the B5 universe will remind us again how to dance.
Great minds have spent many years trying to puzzle out the deep history of our solar system. Far from the tidy clockwork of Plato’s perfect spheres, the current thinking is that while Earth and Venus were clearing their orbits, the Moon was formed by a Mars-sized impactor named Theia smashing into Earth. But that may not be the whole story. In a pair of reports in The Planetary Science Journal, a team of scientists has published a new “collision chain” model for the giant impact hypothesis of our rocky inner planets’ formation, and it challenges our narrative of how Venus, Earth and the Moon spent their youth.
The authors demonstrate the idea that giant impacts may not be the efficient mergers we believe them to be, explained team lead Erik Asphaug. “We find that most giant impacts, even relatively ‘slow’ ones, are hit-and-runs. This means that for two planets to merge, you usually first have to slow them down in a hit-and-run collision,” Asphaug said. “To think of giant impacts, for instance the formation of the moon, as a singular event is probably wrong. More likely it took two collisions in a row.”
To demonstrate their ideas, the authors focus on Venus and Earth. Alexandre Emsenhuber, who worked on the first of the two papers during a postdoctoral fellowship, says in the report that the young Earth would have served as a kind of kinetic shield for Venus, robbing inbound impactors of their momentum and slowing them down. “We think that during solar system formation, the early Earth acted like a vanguard for Venus,” said Emsenhuber.
To explain the vanguard effect, Emsenhuber uses the analogy of a bouncing ball. A body coming in from the outer solar system is like a ball bouncing down a set of stairs, with each bounce representing a collision with another body.
“Along the way, the ball loses energy, and you’ll find it will always bounce downstairs, never upstairs,” he said. “Because of that, the body cannot leave the inner solar system anymore. You generally only go downstairs, toward Venus, and an impactor that collides with Venus is pretty happy staying in the inner solar system, so at some point it is going to hit Venus again.”
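Emsenhuber’s staircase analogy can be sketched numerically. In the toy model below — purely illustrative, and nothing like the team’s actual N-body simulations — a “ball” loses a fixed fraction of its speed at each bounce, so the height it can reach only ever shrinks:

```python
# Toy version of the "ball bouncing down the stairs" analogy: each collision
# (bounce) dissipates energy, so the maximum rebound height only ever shrinks.
RESTITUTION = 0.8        # fraction of speed retained per bounce (assumed)
g = 9.81                 # gravitational acceleration, m/s^2

height = 10.0            # starting drop height in metres (assumed)
for bounce in range(1, 8):
    speed = (2 * g * height) ** 0.5          # speed just before impact
    speed *= RESTITUTION                     # energy lost in the collision
    height = speed ** 2 / (2 * g)            # maximum rebound height
    print(f"after bounce {bounce}: max height {height:.2f} m")
# Height falls monotonically: the ball only ever moves "downstairs," just as a
# hit-and-run impactor loses momentum and drifts inward toward Venus.
```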
A second paper, published in tandem with the first, uses machine learning to build predictive models from 3D simulations of giant impacts. The team tested their predictive powers on both hit-and-run and merging collisions, to simulate terrestrial planet formation over a period of 100 million years. The authors further propose and demonstrate their hit-and-run-return scenario on the formation of the Moon.
“The standard model for the moon requires a very slow collision, relatively speaking,” said Asphaug, “and it creates a moon that is composed mostly of the impacting planet, not the proto-Earth, which is a major problem since the moon has an isotopic chemistry almost identical to Earth.”
In the team’s new scenario, a protoplanet roughly the size of Mars hits the Earth, just as in the standard model. But instead of the Earth simply accreting Theia in one shot, the impactor bounces off in a big sloshy mess. It returns in about a million years for another giant impact, moving slower this time — and that second pass could be the key to better aligning our models with what we see.
“The double impact mixes things up much more than a single event,” Asphaug said, “which could explain the isotopic similarity of Earth and moon, and also how the second, slow, merging collision would have happened in the first place.”
It may also explain the different chemical compositions of Earth and Venus. Because impactors that hit Earth with a glancing blow would have been flung away deeper into the Sun’s gravity well, Asphaug added, “Earth would have accreted most of its material from collisions that were head-on hits, or else slower than those experienced by Venus. Collisions into the Earth that were more oblique and higher velocity would have preferentially ended up on Venus.”
“You would think that Earth is made up more of material from the outer system because it is closer to the outer solar system than Venus. But actually, with Earth in this vanguard role, it makes it actually more likely for Venus to accrete outer solar system material.”
Big tech companies like Amazon are under increasing scrutiny for the way they use (and sometimes misuse) personal data. That didn’t stop Amazon from announcing Astro, its first-ever home robot, which can trundle around your house, record video, deliver messages, and more. Anyone who actually wants an Amazon robot monitoring their house will have to pay at least $1,000 for the privilege.
Astro looks like a small rounded block with two 12-inch wheels toward the front. There’s a third wheel under the rear end to keep the robot stable. Up front, there’s a 10-inch screen that sits on a short arm. The display can show you content, but at idle it’s the robot’s virtual face. Amazon says it looked at movies, TV, games, and animation to give Astro a bit of personality. The way the eyes move and the expressive tones are supposed to put nearby humans at ease. Exactly how much at ease will probably depend on how you feel about Amazon mapping your house.
In order to get around, Astro needs to know where it is. Amazon developed a technology called Intelligent Motion, which makes use of simultaneous localization and mapping (SLAM). So, the robot uses its sensors to build a 3D map of the space around it. Intelligent Motion devises several hundred potential paths to the goal and then scores them before choosing the best one. Not only does it learn the layout of your home, but it can cope with changes like an open drawer, a pile of dirty laundry, or a dog lying on the floor.
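Amazon hasn’t published how Intelligent Motion actually scores its candidate paths, but the general “generate many, score them, pick the best” pattern is standard in robot navigation. A minimal sketch of the idea, with a made-up cost function (path length plus a penalty for passing too close to obstacles); none of this reflects Amazon’s implementation:

```python
import math
import random

# Hypothetical obstacle map: a few (x, y) points the robot should keep clear of.
OBSTACLES = [(1.0, 1.0), (2.5, 0.5), (1.5, 2.0)]
GOAL = (3.0, 3.0)

def random_path(start, goal, waypoints=4, rng=random):
    """Generate a candidate path as start -> random waypoints -> goal."""
    return [start] + [(rng.uniform(0, 3.5), rng.uniform(0, 3.5))
                      for _ in range(waypoints)] + [goal]

def cost(path, clearance=0.5):
    """Score a path: total length plus a penalty for hugging obstacles."""
    length = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    penalty = sum(max(0.0, clearance - math.dist(p, o))
                  for p in path for o in OBSTACLES)
    return length + 10.0 * penalty

candidates = [random_path((0.0, 0.0), GOAL) for _ in range(300)]
best = min(candidates, key=cost)
print(f"best of {len(candidates)} candidate paths costs {cost(best):.2f}")
```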
At its most basic level, Astro does all the things an Alexa-powered smart display can do. You can control smart home devices, view camera feeds, and check the weather. The robot has its own array of cameras, so it can even patrol the house when you’re away. Amazon also added a telescoping camera that rises above the robot’s body to view things it couldn’t otherwise see. For example, Astro could check to make sure you turned off the oven. The cameras can also recognize members of the family, so you can drop items in Astro’s cargo bin and have them delivered to a specific person. Just tell it who to find, and off it goes.
There are certainly some neat use cases here, but Astro is very rudimentary as far as robots go. It can’t traverse stairs, and the lack of arms means it won’t be able to carry anything outside that cargo bin. It won’t clean your floors, either. It’s a lot like the other home robots we’ve seen announced, but few of them have even gone on sale. Amazon is taking it slow with Astro — customers who apply to order the robot will have the chance to do so later this year for $999.99. Later, the price will go up to $1,499, but it will never be a mass-produced product.
Bad news for anyone hoping for a quick end to the semiconductor shortage. AMD’s CEO Lisa Su is the latest tech CEO to suggest we won’t see a recovery until the back half of next year.
Semiconductor supply will remain “likely tight,” Su said at Code Conference 2021. She predicts that supply will start to improve next year as new manufacturing facilities come online, but that supply and demand may not equalize for roughly another year.
One of the slightly frustrating aspects of the ongoing semiconductor shortage is that it’s never very clear which product or market is being discussed in particular. Take the current situation in the PC retail channel. While AMD CPUs were difficult to come by earlier this year, chips like the Ryzen 7 5800X can currently be ordered for ~$395 at Amazon. That’s below the official MSRP of $449. This may be a flash in the pan, since AMD chips have been harder to find at various points in the year, but the CPU market isn’t showing much sign of shortages right now.
The GPU market, on the other hand, has been steadily moving in the wrong direction since the 4th of July, when prices hit their summer low of “just” 1.5x above MSRP. The situation was worsening earlier this month, though China’s recent decision to ban Bitcoin and cryptocurrency mining could have an impact on GPU demand, at least in the short term.
At the same conference, Su downplayed AMD’s activity in the crypto market, noting it’s a small, volatile market and not one AMD intends to focus on. This is most likely true, but it doesn’t tell us much about how many of the company’s GPUs are flowing to miners anyway. It may be that AMD isn’t selling very many cards to miners because it isn’t shipping very many retail cards in the first place.
Nobody Knows How This Will End
Su’s predictions jibe with those of Jensen Huang and various analyst firms, but it’s not clear how much value we should put on these predictions. The truth is, the semiconductor industry didn’t see the tech boom of late 2020 – 2021 coming. In the immediate aftermath of lockdowns, forecasts were predicting a slump once an early buying boom petered out.
The early buying boom never really wore off, and while there have been signs of slower growth in some markets, there’s dispute about what to expect in the future. The semiconductor industry has committed to bringing new fabs online at breakneck speed. Capacity buildouts that start coming online 12-18 months from now could arrive just as market demand begins to fall off. GPU availability could improve dramatically now that China has banned cryptocurrencies, or it might not improve in the short term if other issues are clogging the pipes. There are reports that many companies have suffered recent shortages because of plant closures in Malaysia and elsewhere in Asia under ongoing COVID-19 restrictions.
There are good reasons to think that things will be better 8-12 months from now. More factory capacity should be coming online. More people across the globe will have been vaccinated. Companies, individuals, and governments will have another year of experience dealing with the shortages and issues caused by the pandemic. Hopefully by this time next year, prices will be falling back towards normal.
But at the same time, we recommend readers take these predictions with several grains of salt rather than the usual one. A year ago, there were confident reports that the semiconductor shortage should ease by spring at the latest. Ampere celebrated its first birthday a few weeks ago, and we’re all still here. We’re not claiming that Lisa Su or any other tech executive is wrong, but the pandemic’s impact on the business cycle has been difficult for analysts to predict thus far.
If the 21st century has taught us any astronomical lessons, it’s that counting planets is hard. In 2000, there were nine planets, and now there are eight, but that might not last. Astronomers have been on the hunt for a theorized ninth planet in the extreme outer solar system, and now a study suggests there might be another planet out there. Unlike the massive (and completely hypothetical) Planet Nine, this one is believed to be a small, rocky world like Mars.
All the planetary uncertainty lies in the outer reaches of the solar system, beyond the orbit of Neptune. This is where Clyde Tombaugh discovered Pluto, which we thought was a planet for decades but has since been demoted to a dwarf planet. It was still a notable discovery as the first known representative of the Kuiper Belt, a ring of icy rocks that includes other big planetoids like Makemake and Eris.
To make sense of the mishmash of objects out there, scientists often turn to simulations that can search for signs of undiscovered planets. And there could be a lot to find out there. “It seems unlikely that nature created four giant planet cores, but then nothing else larger than dwarf planets in the outer solar system,” the study says.
The team found that models capable of closely approximating the current state of our solar system start with at least one extra planet, something vaguely Earth or Mars-like. This world was bounced around in the outer solar system by the intense gravity fields of Neptune and Saturn until it ended up in a far-out orbit where we can’t see it. It’s also possible the planet (or planets) were ejected from the solar system.
There may be a planet like Mars lurking in the Kuiper Belt.
We are only beginning to understand how solar systems like ours form, but it’s become apparent that planets don’t stay in the same orbit forever — they might migrate in or out depending on conditions and interactions with other objects. The simulations underpinning this study show that the four giant planets may have rearranged themselves as they gained mass. Jupiter moved inward, and the others moved outward. In about half the simulations, all the extra rocky planets were kicked out into interstellar space, but in the other half, one of them remained in the Kuiper Belt region.
The existence of this extra planet doesn’t preclude the existence of Planet Nine and vice versa. We won’t know which (if either) of them exist until someone can find them out there. The upcoming Vera Rubin Observatory might be able to see these objects when it begins surveying the sky in 2023. The ESA’s Gaia star mapping satellite might also see evidence of extra planets, but only if such a planet distorts the light from distant stars while Gaia happens to be watching.
Experience your favorite games in a whole new way by purchasing new cutting-edge gaming hardware. Today you can get a Dell Alienware gaming laptop with an AMD Ryzen 7 5800H CPU, an Nvidia RTX 3060 graphics chip and a blazing-fast 165Hz display for just $1,299.99.
The new Dell Alienware M15 R5 features an updated thermal solution that helps you to get the most out of your hardware. The system has powerful components, including an AMD Ryzen 7 5800H octa-core processor and an Nvidia GeForce RTX 3060 graphics chip that can run games fluidly on the notebook’s 165Hz 1080p display. It also has an RGB LED keyboard, 16GB of RAM and a 256GB PCI-E SSD. This new system hasn’t been out long, but you can get one now from Dell marked down from $1,693.98 to just $1,299.99.
SanDisk built this external SSD with a large 1TB capacity and a rugged water-resistant exterior. The drive can transfer data at speeds of up to 1,050MB/s over USB 3.2 Gen 2, which far outstrips your typical USB flash drive or external HDD. You can currently buy this SSD marked down from its original retail price of $249.99 to $148.14.
Amazon’s Kindle Paperwhite is rated IPX8 waterproof, which means it can spend up to an hour submerged under two meters of water and continue to work. This model also has 8GB of storage space, giving you plenty of room to hold countless books. Right now it’s marked down from $129.99 to $79.99 from Amazon.
The Nighthawk R6700 is one of the most popular Wi-Fi routers on the market. It offers reliable performance with speeds of up to 1,750Mbps across two bands. It also has built-in USB ports for adding network resources. Right now it’s marked down at Newegg from $99.99 to $79.99.
Note: Terms and conditions apply. See the relevant retail sites for more information. For more great deals, go to our partners at TechBargains.com.
With video calls now forming such an important part of the way we do business, having a little extra tech to help you look your best and do your best work on camera is vital. Everyone has seen that one colleague who has something problematic in their background, or is just plain not visible due to poor lighting or camera positioning – and it doesn’t help their case if they’re trying to make a point or sell you on something. Having a clear camera view with control over your background is essential to appearing professional and engaged in video conference settings, and it’s not something that you can let slip if you’re aiming to succeed.
One service that can help is XSplit VCam Premium, lifetime subscriptions to which are currently available for the further reduced price of $17.40 when you use the coupon code VIP40 at checkout, which gets you 71 percent off the full ordinary purchase price of $60. Improve your presence on video calls and conferences with this handy app.
XSplit VCam offers high-tech background replacement, removal, and blurring that works with any webcam, and it doesn’t require green screens, complex lighting setups, or lots of space. All you need to do is open the app, add XSplit VCam as your camera source, and replace, remove, or blur your background. The app is compatible with a range of streaming software such as Open Broadcaster Software (OBS), Streamlabs, and XSplit, making it ideal for podcasts, vlogs, talk shows, or other video projects. It also works with all major video chat applications and conferencing solutions. It even lets you use your mobile device as a webcam via the app extension XSplit Connect: Webcam.
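If you’re curious how this kind of background blur works under the hood, the general technique is person segmentation: a model separates you from the background, and the background pixels get blurred or replaced. The sketch below shows that general idea, not XSplit’s own code, and assumes the opencv-python, numpy, and mediapipe packages (with mediapipe’s legacy selfie-segmentation solution) are installed:

```python
# One way to blur a webcam background without a green screen, using person
# segmentation. This sketches the general technique, not XSplit's implementation.
import cv2
import numpy as np
import mediapipe as mp

segmenter = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1)
cap = cv2.VideoCapture(0)                      # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    mask = segmenter.process(rgb).segmentation_mask  # ~1.0 = person, ~0.0 = background
    condition = np.stack((mask,) * 3, axis=-1) > 0.5
    blurred = cv2.GaussianBlur(frame, (55, 55), 0)
    output = np.where(condition, frame, blurred)     # keep the person, blur the rest
    cv2.imshow("virtual background (press q to quit)", output)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```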
Lifetime subscriptions to XSplit VCam Premium are now on sale for $17.40 when you use the coupon code VIP40 at checkout, making it quicker and easier for you to look your best in video productions and calls.
Note: Terms and conditions apply. See the relevant retail sites for more information. For more great deals, go to our partners at TechBargains.com.