Slashdot
Trevor Milton, the pardoned founder of Nikola, is seeking $1 billion for AI-powered autonomous planes through a new venture called SyberJet. The Tech Buzz reports: "Autonomous planes will be 10 times harder than Nikola ever was," Milton told the Wall Street Journal in a rare interview. It's a remarkable admission from someone whose last venture collapsed under the weight of securities fraud charges after he overstated the capabilities of Nikola's electric and hydrogen-powered trucks. Milton was convicted in 2022 on three counts of fraud for misleading investors about Nikola's technology, including staging a video that made it appear a truck prototype was driving under its own power when it was actually rolling downhill. The conviction sent him to prison and turned Nikola into a cautionary tale about startup hype culture. His pardon, which came earlier this year, sparked immediate controversy in venture capital and legal circles. Now he's betting that AI and autonomous aviation represent a clean slate. SyberJet appears focused on developing artificial intelligence systems capable of piloting aircraft without human intervention - a technical challenge that's stumped even well-funded players like Boeing and Airbus. [...] Milton hasn't detailed SyberJet's technical approach or revealed who's backing the venture. The company's website remains sparse, and aviation industry sources say they haven't seen concrete demonstrations of the technology. That opacity echoes the early days of Nikola, when Milton made sweeping claims about revolutionary trucks that existed mostly in renderings and promotional videos. 
If you need a quick refresher on the Nikola saga, here's a timeline of key events:
June 2016: Nikola Motor Receives Over 7,000 Preorders Worth Over $2.3 Billion For Its Electric Truck
December 2016: Nikola Motor Company Reveals Hydrogen Fuel Cell Truck With Range of 1,200 Miles
February 2020: Nikola Motors Unveils Hybrid Fuel-Cell Concept Truck With 600-Mile Range
June 2020: Nikola Founder Exaggerated the Capability of His Debut Truck
September 2020: Nikola Motors Accused of Massive Fraud, Ocean of Lies
September 2020: Nikola Admits Prototype Was Rolling Downhill In Promo Video
September 2020: Nikola Founder Trevor Milton Steps Down as Chairman in Battle With Short Seller
October 2020: Nikola Stock Falls 14 Percent After CEO Downplays Badger Truck Plans
November 2020: Nikola Stock Plunges As Company Cancels Badger Pickup Truck
July 2021: Nikola Founder Trevor Milton Indicted on Three Counts of Fraud
December 2021: EV Startup Nikola Agrees To $125 Million Settlement
September 2022: Nikola Founder Lied To Investors About Tech, Prosecutor Says in Fraud Trial
Read more of this story at Slashdot. - FBI Is Buying Location Data To Track US Citizens, Director Confirms An anonymous reader quotes a report from TechCrunch: The FBI has resumed purchasing reams of Americans' data and location histories to aid federal investigations, the agency's director, Kash Patel, testified to lawmakers on Wednesday. This is the first time since 2023 that the FBI has confirmed it was buying access to people's data collected from data brokers, who source much of their information -- including location data -- from ordinary consumer phone apps and games, per Politico. At the time, then-FBI director Christopher Wray told senators that the agency had bought access to people's location data in the past but that it was not actively purchasing it. When asked by U.S.
Senator Ron Wyden, Democrat of Oregon, if the FBI would commit to not buying Americans' location data, Patel said that the agency "uses all tools ... to do our mission." "We do purchase commercially available information that is consistent with the Constitution and the laws under the Electronic Communications Privacy Act -- and it has led to some valuable intelligence for us," Patel testified Wednesday. Wyden said buying information on Americans without obtaining a warrant was an "outrageous end-run around the Fourth Amendment," referring to the constitutional amendment that protects people in America from unreasonable device searches and data seizures. Read more of this story at Slashdot. - Cloudflare Appeals Piracy Shield Fine, Hopes To Kill Italy's Site-Blocking Law Cloudflare is appealing a 14.2 million-euro fine from Italy for refusing to comply with its "Piracy Shield" law, which requires blocking access to websites on its 1.1.1.1 DNS service within 30 minutes. The company argues the system lacks oversight, risks widespread overblocking, and could undermine core Internet infrastructure. Ars Technica's Jon Brodkin reports: Piracy Shield is "a misguided Italian regulatory scheme designed to protect large rightsholder interests at the expense of the broader Internet," Cloudflare said in a blog post this week. "After Cloudflare resisted registering for Piracy Shield and challenged it in court, the Italian communications regulator, AGCOM, fined Cloudflare... We appealed that fine on March 8, and we continue to challenge the legality of Piracy Shield itself." Cloudflare called the fine of 14.2 million euros ($16.4 million) "staggering." AGCOM issued the penalty in January 2026, saying Cloudflare flouted requirements to disable DNS resolution of domain names and routing of traffic to IP addresses reported by copyright holders.
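Mechanically, the resolver-level blocking Piracy Shield mandates amounts to consulting a deny-list before answering a DNS query. A minimal sketch in Python (the domains and addresses are hypothetical illustrations, not real Piracy Shield entries, and a real resolver obviously does far more than a dictionary lookup):

```python
from typing import Optional

# Deny-list of names a resolver has been ordered to block (hypothetical entries).
BLOCKLIST = {"pirated-stream.example"}

# Stand-in for real DNS lookups: a static name -> address table.
RECORDS = {
    "pirated-stream.example": "203.0.113.10",
    "legit-site.example": "203.0.113.11",
}

def resolve(domain: str) -> Optional[str]:
    """Return an address, or None (an NXDOMAIN-style refusal) for blocked names."""
    if domain in BLOCKLIST:
        return None  # refuse to resolve, as a blocking order requires
    return RECORDS.get(domain)

print(resolve("legit-site.example"))      # unaffected lookups still work
print(resolve("pirated-stream.example"))  # blocked name gets no answer
```

The overblocking risk Cloudflare raises falls out of the same picture: if orders target IP addresses rather than names, blocking one address also silences every unrelated site that happens to share it.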
Cloudflare had previously resisted a blocking order it received in February 2025, arguing that it would require installing a filter on DNS requests that would raise latency and negatively affect DNS resolution for sites that aren't subject to the dispute over piracy. Cloudflare co-founder and CEO Matthew Prince said that censoring the 1.1.1.1 DNS resolver would force the firm "not just to censor the content in Italy but globally." Piracy Shield was designed to combat pirated streams of live sports events, requiring network operators to block domain names and IP addresses within 30 minutes of receiving a copyright notification. Cloudflare said the fine should have been capped at 140,000 euros ($161,000), or 2 percent of its Italian earnings, but that "AGCOM calculated the fine based on our global revenue, resulting in a penalty nearly 100 times higher than the legal limit." Despite its complaints about the size of the fine, Cloudflare said the principles at stake "are even larger" than the financial penalty. "Piracy Shield is an unsupervised electronic portal through which an unidentified set of Italian media companies can submit websites and IP addresses that online service providers registered with Piracy Shield are then required to block within 30 minutes," Cloudflare said. Cloudflare is pushing for the law to be struck down, arguing that it is "incompatible with EU law, most notably the Digital Services Act (DSA), which requires that any content restriction be proportionate and subject to strict procedural safeguards." In addition to appealing the fine, Cloudflare says it will continue to challenge Piracy Shield in Italian courts, engage with EU officials, and seek full access to AGCOM's Piracy Shield records. Read more of this story at Slashdot. - Google Is Trying To Make 'Vibe Design' Happen With today's latest Stitch updates, Google is trying to make "vibe design" happen, reports The Verge's Jay Peters. 
The AI-native design platform encourages users to describe goals, feelings, or inspiration in "natural language," rather than starting with traditional blueprints. In a blog post, Google Labs Product Manager Rustin Banks says that Stitch can turn those inputs into interactive prototypes, automatically map user flows, and support real-time iteration. It introduces voice capabilities that allow users to "speak directly to [the] canvas" for feedback or changes. Tools like DESIGN.md also help users create reusable design systems across various projects. Read more of this story at Slashdot. - New Windows 11 Bug Breaks Samsung PCs, Blocking Access To C: Drive Longtime Slashdot reader UnknowingFool writes: Users of Samsung PCs are reporting the inability to access the C: drive after the Windows 11 February update. The bug seems to be in connection with the Samsung Galaxy Connect app, which allows Samsung phones and tablets to connect to Windows machines. [A previous stable version of the app has been re-released to prevent this problem from spreading.] This parody explains the situation with humor. The issue stems from update KB5077181 and is impacting Samsung PCs running Windows 11 25H2 or 24H2. Microsoft and Samsung have confirmed the issue and published a workaround, but as PCWorld notes, it will take some time. The workaround "requires removing the Samsung application, then asking Windows to repair the drive permissions and assigning a new owner, then restoring the Windows default permissions, including patching in some custom code that Microsoft wrote." Read more of this story at Slashdot. - UK Plans To Require Labels On AI-Generated Content An anonymous reader quotes a report from Reuters: Britain plans to consider requiring labels on AI-generated content to protect consumers from disinformation and deepfakes, the government said on Wednesday, as it outlined other areas of focus to tackle the evolving global challenge. 
Technology minister Liz Kendall stressed the need to strike the right balance between protecting the creative industries and allowing the AI sector to innovate, saying in a statement that the government would take time to "get this right." The next phase of the government's work on copyright and AI would also look at the harms posed by digital replicas without consent, ways for creators to control their work online and support for independent creative organizations, she said. [...] Louise Popple, a copyright expert at law firm Taylor Wessing, noted that the government had not ruled out a broad exception that would allow AI developers to train on copyright works. "That's a subtle difference of approach and could be interpreted to mean that everything is still up for grabs," she said. "It feels very much like the hard issues are being kicked down the road by the government." In 2024, Britain proposed easing copyright rules to let developers train models on lawfully accessed material, with creators able to reserve their rights. On Wednesday, Kendall said that having engaged with creatives, AI firms, industry bodies, unions and academics, the government had concluded it "no longer has a preferred option." "We will help creatives control how their work is used. This sits at the heart of our ambition for creatives -- including independent and smaller creative organizations -- to be paid fairly," she said. Read more of this story at Slashdot. - Meta Is Shutting Down VR Social Platform Horizon Worlds Meta is shutting down its VR social platform Horizon Worlds, which was once a key piece of the pivot to the metaverse. The company said the app will be taken off the Quest store at the end of March, and fully removed from Quest headsets by June 15. After that date, it will shift to a standalone "mobile-only experience."
CNBC reports: The shift for Horizon Worlds, which was once a central part of the company's push into virtual reality, comes weeks after Meta cut over 1,000 employees from Reality Labs, the unit responsible for the metaverse. [...] The social platform has never drawn more than a couple hundred thousand active users a month, CNBC previously reported. The virtual 3D social network where avatars could interact and play games with other users officially launched in late 2021. It operated exclusively on the Quest VR platform until Meta launched a mobile app version in September 2023. The mobile version of Horizon Worlds was built to provide an entry point for users without VR headsets, functioning similarly to Roblox. Read more of this story at Slashdot. - SaaS Apocalypse Could Be OpenSource's Greatest Opportunity Longtime Slashdot reader internet-redstar writes: Nearly a trillion dollars has been wiped from software stocks in 2026, with hedge funds making billions shorting Salesforce, HubSpot, and Atlassian. At FOSDEM 2026, cURL maintainer Daniel Stenberg shut down his bug bounty program after AI-generated slop overwhelmed his team. A new article on HackerNoon argues that most commercial SaaS could inevitably become OpenSource, not out of ideology but economics. The author points to Proxmox replacing VMware at enterprise scale and startups like Holosign replicating DocuSign at $19/month flat as evidence. The catch, the article claims, is that maintainers who refuse to embrace AI tools risk being forked, or simply replicated from scratch, by those who do. Read more of this story at Slashdot. - 2026 Turing Award Goes To Inventors of Quantum Cryptography Dave Knott shares a report from the New York Times: On Wednesday, the Association for Computing Machinery, the world's largest society of computing professionals, said Drs. Charles Bennett and Gilles Brassard had won this year's Turing Award for their work on quantum cryptography and related technologies. 
The Turing Award, which was introduced in 1966, is often called the Nobel Prize of computing, and it includes a $1 million prize, which the two scientists will share. [...] The two met in 1979 while swimming in the Atlantic just off the north shore of Puerto Rico. They were taking a break while attending an academic conference in San Juan. Dr. Bennett swam up to Dr. Brassard and suggested they use quantum mechanics to create a bank note that could never be forged. Collaborating between Montreal and New York, they applied Dr. Bennett's idea to subway tokens rather than bank notes. In a research paper published in 1983, they showed that their quantum subway tokens could never be forged, even if someone managed to steal the subway turnstile housing the elaborate hardware needed to read them. This led to quantum cryptography. After describing their new form of encryption in a research paper published in 1984, they demonstrated the technology with a physical experiment five years later. Called BB84, their system used photons -- particles of light -- to create encryption keys used to lock and unlock digital data. Thanks to the laws of quantum mechanics, the behavior of a photon changes if someone looks at it. This means that if anyone tries to steal the keys, he or she will leave a telltale sign of the attempted theft -- a bit like breaking the seal on an aspirin bottle. Read more of this story at Slashdot. - Federal Cyber Experts Called Microsoft's Cloud 'a Pile of Shit', Yet Approved It Anyway ProPublica reports that federal cybersecurity reviewers had serious, yearslong concerns about Microsoft's GCC High cloud offering, yet they approved it anyway because the product was already deeply embedded across government. As one member of the team put it: "The package is a pile of shit." From the report: In late 2024, the federal government's cybersecurity evaluators rendered a troubling verdict on one of Microsoft's biggest cloud computing offerings. 
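The basis-sifting step of BB84, described in the Turing Award story above, is easy to mimic with classical bookkeeping. A toy sketch (it simulates only the sifting logic, not the photon physics, and no eavesdropper is modeled; function and variable names are illustrative):

```python
import random

def bb84_sift(n, seed=None):
    """Toy BB84 sifting: Alice encodes random bits in random bases, Bob measures
    in his own random bases, and both keep only the positions where the bases
    happened to match. With no eavesdropper, the sifted keys always agree."""
    rng = random.Random(seed)
    alice_bits = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.choice("+x") for _ in range(n)]  # rectilinear or diagonal
    bob_bases = [rng.choice("+x") for _ in range(n)]

    bob_results = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if a_basis == b_basis:
            bob_results.append(bit)              # same basis: faithful measurement
        else:
            bob_results.append(rng.randint(0, 1))  # wrong basis: random outcome

    # Public discussion: compare bases (never bits) and discard mismatches.
    keep = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
    alice_key = [alice_bits[i] for i in keep]
    bob_key = [bob_results[i] for i in keep]
    return alice_key, bob_key

a, b = bb84_sift(64, seed=7)
print(a == b)  # prints True: with nobody intercepting, the sifted keys agree
```

The security argument lives in what this sketch leaves out: an eavesdropper who measures the photons in the wrong basis disturbs them, so mismatches show up when Alice and Bob later compare a sample of their sifted bits.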
The tech giant's "lack of proper detailed security documentation" left reviewers with a "lack of confidence in assessing the system's overall security posture," according to an internal government report reviewed by ProPublica. For years, reviewers said, Microsoft had tried and failed to fully explain how it protects sensitive information in the cloud as it hops from server to server across the digital terrain. Given that and other unknowns, government experts couldn't vouch for the technology's security. Such judgments would be damning for any company seeking to sell its wares to the U.S. government, but it should have been particularly devastating for Microsoft. The tech giant's products had been at the heart of two major cybersecurity attacks against the U.S. in three years. In one, Russian hackers exploited a weakness to steal sensitive data from a number of federal agencies, including the National Nuclear Security Administration. In the other, Chinese hackers infiltrated the email accounts of a Cabinet member and other senior government officials. The federal government could be further exposed if it couldn't verify the cybersecurity of Microsoft's Government Community Cloud High, a suite of cloud-based services intended to safeguard some of the nation's most sensitive information. Yet, in a highly unusual move that still reverberates across Washington, the Federal Risk and Authorization Management Program, or FedRAMP, authorized the product anyway, bestowing what amounts to the federal government's cybersecurity seal of approval. FedRAMP's ruling -- which included a kind of "buyer beware" notice to any federal agency considering GCC High -- helped Microsoft expand a government business empire worth billions of dollars. "BOOM SHAKA LAKA," Richard Wakeman, one of the company's chief security architects, boasted in an online forum, celebrating the milestone with a meme of Leonardo DiCaprio in "The Wolf of Wall Street." 
It was not the type of outcome that federal policymakers envisioned a decade and a half ago when they embraced the cloud revolution and created FedRAMP to help safeguard the government's cybersecurity. The program's layers of review, which included an assessment by outside experts, were supposed to ensure that service providers like Microsoft could be entrusted with the government's secrets. But ProPublica's investigation -- drawn from internal FedRAMP memos, logs, emails, meeting minutes, and interviews with seven former and current government employees and contractors -- found breakdowns at every juncture of that process. It also found a remarkable deference to Microsoft, even as the company's products and practices were central to two of the most damaging cyberattacks ever carried out against the government. Read more of this story at Slashdot. - Apple Can Delist Apps 'With Or Without Cause,' Judge Says In Loss For Musi App An anonymous reader quotes a report from Ars Technica: Musi, a free music streaming app that had tens of millions of iPhone downloads and garnered plenty of controversy over its method of acquiring music, has lost an attempt to get back on Apple's App Store. A federal judge dismissed Musi's lawsuit against Apple with prejudice and sanctioned Musi's lawyers for "mak[ing] up facts to fill the perceived gaps in Musi's case." Musi built a streaming service without striking its own deals with copyright holders. It did so by playing music from YouTube, writing in its 2024 lawsuit against Apple that "the Musi app plays or displays content based on the user's own interactions with YouTube and enhances the user experience via Musi's proprietary technology." Musi's app displayed its own ads but let users remove them for a one-time fee of $5.99. Musi claimed it complied with YouTube's terms, but Apple removed it from the App Store in September 2024. Musi does not offer an Android app. 
Musi alleged that Apple delisted its app based on "unsubstantiated" intellectual property claims from YouTube and that Apple violated its own Developer Program License Agreement (DPLA) by delisting the app. Musi was handed a resounding defeat yesterday in two rulings from US District Judge Eumi Lee in the Northern District of California. Lee found that Apple can remove apps "with or without cause," as stipulated in the developer agreement. Lee wrote (PDF): "The plain language of the DPLA governs because it is clear and explicit: Apple may 'cease marketing, offering, and allowing download by end-users of the [Musi app] at any time, with or without cause, by providing notice of termination.' Based on this language, Apple had the right to cease offering the Musi app without cause if Apple provided notice to Musi. The complaint alleges, and Musi does not dispute, that Apple gave Musi the required notice. Therefore, Apple's decision to remove the Musi app from the App Store did not breach the DPLA." Read more of this story at Slashdot. - Experiments Show Potatoes Can Survive In Lunar Soil (With Lots of Help) sciencehabit shares a report from Science.org: In The Martian, fictional astronaut Mark Watney survives the wasteland of Mars by growing potatoes in Martian soil -- with a bit of help from human poop. The idea may not be so far-fetched. In a preprint posted this month on bioRxiv, researchers show potatoes can indeed grow in the equivalent of Moon dust, though they need a lot of help from compost found on Earth. To make the discovery, scientists first had to re-create lunar regolith -- the loose, powdery layer that blankets the Moon's surface. To replicate that in the lab, David Handy, a space biologist at Oregon State University (OSU), and his colleagues used a mix of crushed minerals and volcanic ash that matched the chemistry of the Moon. But lunar regolith is entirely devoid of the organic matter that plants need to grow.
"Turning an inorganic, inhospitable bucket of glorified sand into something that can support plant growth is complex," says Anna-Lisa Paul, a plant molecular biologist at the University of Florida not involved with the work. So Handy and his colleagues added vermicompost -- organic waste from worms -- into the regolith. They found that a mix with 5% compost allowed the potatoes to grow while still emulating the stressful conditions of the lunar environment. After almost 2 months of growth, the team harvested the tubers, freeze-dried them, and ground them up for further testing. Analysis of the potatoes' DNA showed stress-related genes had been activated. The potatoes also had higher concentrations of copper and zinc than Earth-grown ones, which may make them dangerous for human consumption. The plants' nutritional value, though, was similar to traditional potatoes -- a surprise to the scientists, who expected lower levels of nutrition "because the plants might have been working overtime to overcome certain stressors," Handy says. Read more of this story at Slashdot. - Nvidia Announces Vera Rubin Space-1 Chip System For Orbital AI Data Centers Nvidia unveiled its Vera Rubin Space-1 system for powering AI workloads in orbital data centers. "Space computing, the final frontier, has arrived," said CEO Jensen Huang. "As we deploy satellite constellations and explore deeper into space, intelligence must live wherever data is generated." CNBC reports: In a press release, the company said that its Vera Rubin Space-1 Module, which includes the IGX Thor and Jetson Orin, will be used on space missions led by multiple companies. The chips are specifically "engineered for size-, weight- and power-constrained environments." Partners include Axiom Space, Starcloud and Planet. Huang said Nvidia is working with partners on a new computer for orbital data centers, but there are still engineering hurdles to overcome. 
"In space, there's no convection, there's just radiation," Huang said during his GTC keynote, "and so we have to figure out how to cool these systems out in space, but we've got lots of great engineers working on it." Read more of this story at Slashdot. - AI Job Loss Research Ignores How AI Is Utterly Destroying the Internet An anonymous reader quotes a report from 404 Media, written by Jason Koebler: Over the last few months, various academics and AI companies have attempted to predict how artificial intelligence is going to impact the labor market. These studies, including a high-profile paper published by Anthropic earlier this month, largely try to take the things AI is good at, or could be good at, and match them to existing job categories and job tasks. But the papers ignore some of the most impactful and most common uses of AI today: AI porn and AI slop. Anthropic's paper, called "Labor market impacts of AI: A new measure and early evidence," essentially attempts to find 1:1 correlations between tasks that people do today at their jobs and things people are using Claude for. The researchers also try to predict if a job's tasks "are theoretically possible with AI," which resulted in this chart, which has gone somewhat viral and was included in a newsletter by MSNOW's Phillip Bump and threaded about by tech journalist Christopher Mims. (Because everything is terrible, the research is now also feeding into a gambling website where you can see the apparent odds of having your job replaced by AI.) In his thread, Mims makes the case that the "theoretical capability" of AI to do different jobs in different sectors is totally made up, and that this chart basically means nothing. Mims makes a good and fair observation: The nature of the many, many studies that attempt to predict which people are going to lose their jobs to AI are all flawed because the inputs must be guessed, to some degree. 
But I believe most of these studies are flawed in a deeper way: They do not take into account how people are actually using AI, though Anthropic claims that that is exactly what it is doing. "We introduce a new measure of AI displacement risk, observed exposure, that combines theoretical LLM capability and real-world usage data, weighting automated (rather than augmentative) and work-related uses more heavily," the researchers write. This is based in part on the "Anthropic Economic Index," which was introduced in an extremely long paper published in January that tries to catalog all the high-minded uses of AI in specific work-related contexts. These uses include "Complete humanities and social science academic assignments across multiple disciplines," "Draft and revise professional workplace correspondence and business communications," and "Build, debug, and customize web applications and websites." Not included in any of Anthropic's research are extremely popular uses of AI such as "create AI porn" and "create AI slop and spam." These uses are destroying discoverability on the internet and causing cascading societal and economic harms. "Anthropic's research continues a time-honored tradition by AI companies who want to highlight the 'good' uses of AI that show up in their marketing materials while ignoring the world-destroying applications that people actually use it for," argues Koebler. "Meanwhile, as we have repeatedly shown, huge parts of social media websites and Google search results have been overtaken by AI slop. Chatbots themselves have killed traffic to lots of websites that were once able to rely on ad revenue to employ people, so on and so forth..." "This is all to say that these studies about the economic impacts of AI are ignoring a hugely important piece of context: AI is eating and breaking the internet and social media," writes Koebler, in closing.
"We are moving from a many-to-many publishing environment that created untold millions of jobs and businesses towards a system where AI tools can easily overwhelm human-created websites, businesses, art, writing, videos, and human activity on the internet. What's happening may be too chaotic, messy, and unpleasant for AI companies to want to reckon with, but to ignore it entirely is malpractice." Read more of this story at Slashdot. - Arizona Charges Kalshi With Illegal Gambling Operation Arizona has filed criminal charges against Kalshi, accusing it of operating an illegal gambling business. "Kalshi may brand itself as a 'prediction market,' but what it's actually doing is running an illegal gambling operation and taking bets on Arizona elections, both of which violate Arizona law," Arizona Attorney General Kris Mayes said in a statement. The case could ultimately head to the Supreme Court to decide whether federal oversight by the Commodity Futures Trading Commission overrides state gambling laws. Bloomberg reports: While state regulators have taken steps to crack down on what they say is unlicensed betting on Kalshi's site, Arizona appears to be the first state to escalate to criminal charges. The charges cited in the complaint are misdemeanors, which carry less serious penalties than felonies. [...] Prediction market exchanges like Kalshi have said they should continue to be regulated by the US Commodity Futures Trading Commission despite opposition from some state officials, who argue the trading should come under state gambling laws. Arizona's criminal complaint follows Kalshi's move last week to block the state's gaming department from taking enforcement action against the company. "These are the first criminal charges of any kind filed against Kalshi in any court in the United States, but it will likely be the first of several," said Daniel Wallach, a sports and gaming attorney. Read more of this story at Slashdot. |