AI wrap: Education push, data privacy rollbacks, Meta’s fraud crisis

Key developments in US AI policy and practice.

The federal government has launched a sweeping initiative to integrate AI education into schools and workforce development programs, even as it retreats from key data privacy protections once designed to curb commercial surveillance.

At the same time, Meta’s platforms are under growing scrutiny for facilitating a wave of scams run by transnational criminal networks, and the State Bar of California is facing public backlash over its use of AI-generated questions on the bar exam.

These developments reveal a fragmented landscape of AI oversight, where rapid innovation, systemic vulnerabilities, and lagging policy responses often collide. Amid these tensions, a recent discussion at Stanford’s Data on Purpose 2025 conference underscored a deeper dilemma: the challenge ahead is not only how to build more intelligent systems, but how to align them with human goals.

Federal push to embed AI education

In a sweeping new executive order, the White House has launched a federal initiative to future-proof America’s youth by embedding artificial intelligence education across the K-12 system and beyond. Framed as a national competitiveness imperative, the policy aims to democratize access to AI skills by integrating foundational literacy into early education while upskilling educators through targeted grants and partnerships. 

At the heart of the strategy lies a new AI Education Task Force comprising cabinet-level officials and led by the Office of Science and Technology Policy, charged with coordinating public-private partnerships, launching a nationwide AI Challenge to stimulate innovation, and accelerating the creation of AI-centric curriculum and teacher training materials. 

By elevating AI education to a matter of federal policy, the administration seeks to cultivate not just digital literacy, but a pipeline of homegrown AI talent that can compete globally in both commercial and scientific domains.

But this isn’t merely a STEM refresh for classrooms; it’s a structural recalibration of the nation’s human capital model. The order calls for an expansion of registered apprenticeships in AI, industry-recognized certification programs for high schoolers, and the repurposing of workforce development funds toward AI upskilling at every stage of working life.

Agencies like the Department of Education, National Science Foundation, and Department of Labor are now tasked with reengineering grantmaking, workforce incentives, and training infrastructure to treat AI not just as a subject, but as a lens through which all learning and labor readiness are shaped. 

Whether this policy moment produces a truly inclusive AI economy or simply extends existing digital divides will depend less on the executive branch’s ambition and more on its ability to execute across fragmented state, local, and workforce systems.

In higher education, efforts like American University’s new AI Institute illustrate how this federal vision is beginning to take root beyond K-12.

California’s bar exam faces backlash

The California State Bar has admitted that some of the multiple-choice questions on its troubled February 2025 bar exam were generated with the help of artificial intelligence, a revelation that has fueled intense criticism from legal educators and test-takers alike.

Of the 171 scored multiple-choice questions, 23 were developed using AI by an outside consulting firm with no lawyers on staff, prompting concerns about both the validity of the content and the broader integrity of the exam. Technical issues had already marred the test, with widespread reports of system crashes, lagging screens, and incomplete submissions.

While the Bar maintains confidence in the fairness of the AI-assisted questions, critics counter that the questions lacked subject-matter oversight and that letting the same firm both develop and approve them created a troubling conflict of interest.

Legal scholars warn that deploying AI in such high-stakes settings without clear standards or expert vetting undermines public trust. Yet others, like Suffolk Law Dean Andrew Perlman, suggest that AI’s role in legal assessment is likely to expand, and that future concerns may shift from overuse to the competence of professionals who fail to integrate AI at all.

These concerns echo sentiments raised at the SEC’s inaugural AI roundtable, held in Washington, DC, where financial leaders and academics debated the delicate balance between leveraging AI and managing its risks. 

As with the California bar’s AI-generated questions, many financial firms now face scrutiny over how much responsibility they delegate to machines. Key takeaways from the roundtable included calls for firm-level governance structures, caution over “black box” models, and the necessity of keeping skilled humans “in the loop.”

Data broker rule withdrawn

In a quiet but consequential reversal, the Trump administration has scrapped a proposed Consumer Financial Protection Bureau (CFPB) rule that would have brought data brokers under the same regulatory umbrella as credit bureaus, requiring them to obtain Americans’ consent before selling sensitive personal information. 

The move halts an effort to close long-standing loopholes in the Fair Credit Reporting Act and was announced via a low-profile Federal Register notice, just days after intense lobbying from fintech industry groups. 

Acting CFPB Director Russell Vought claimed the rule no longer aligns with the Bureau’s interpretation of the law, drawing fierce criticism from privacy advocates, national security experts, and consumer groups who see the decision as a retreat in the face of corporate pressure.

The rollback leaves intact a sprawling, opaque data brokerage industry that traffics in everything from Social Security numbers to geolocation data, often with little oversight or consumer awareness. Critics warn this deregulation endangers not only individual privacy but national security, with mounting evidence that foreign actors have accessed commercially available datasets to track US military personnel and government contractors. 

The CFPB’s decision comes amid widespread federal downsizing and leadership changes, prompting concern that consumer protections may be deprioritized in favor of industry-focused reforms. For now, Americans remain exposed to the unchecked monetization of their most intimate data: a vulnerability both profitable and perilous.

Global fraud crisis

Meta’s sprawling digital empire, home to Facebook, Instagram, and Marketplace, is rapidly becoming a global hub for online fraud, with regulators and banks flagging its platforms as prime territory for scammers. 

Small businesses are bearing the brunt, as counterfeit ads featuring their names and photos deceive consumers into sending payments for non-existent goods. Despite years of internal warnings and public scrutiny, Meta continues to tolerate prolific ad fraud from overseas networks, in part because its $160 billion advertising business thrives on high ad volume. 

Internal documents reveal that advertisers can rack up dozens of fraud-related “strikes” before facing removal, even in cases involving cross-border financial deception.

The company’s permissiveness has allowed Southeast Asian crime syndicates to weaponize the platform’s reach, turning Meta into a vector for sophisticated fraud schemes, including those powered by crypto and generative AI. 

Marketplace, Meta’s peer-to-peer sales hub, has become a particularly fertile ground for scams. Yet despite the documented harms, from ruined small businesses to victims of international pig-butchering scam operations, Meta deflects liability in court, arguing it has no legal obligation to prevent fraud on its platforms. 

Legal arguments aside, Meta’s regulatory posture is under renewed scrutiny as lawmakers and financial regulators debate tightening tech accountability, including efforts led by the House Financial Services Committee to reassess rules seen as enabling consumer exploitation.

The rise in Meta-enabled scams mirrors broader trends flagged by the CFTC, which recently warned that generative AI is accelerating a new class of sophisticated, tech-driven financial fraud.

AI’s real impact

At a recent session of Stanford’s Data on Purpose 2025: Reimagining the Digital Future conference, participants examined the widening gap between AI’s promised transformation and its current trajectory. 

While acknowledging the technology’s potential, discussion centered on the concern that industry priorities, particularly the pursuit of artificial general intelligence, are delivering incremental automation rather than broad-based productivity gains.

Macroeconomic models suggest that the current path may yield less than a 1% productivity increase over a decade, modest by historical standards and far below the expectations often echoed in tech circles. True economic transformation, participants argued, comes not from automating routine tasks but from innovations that unlock new capabilities and markets, an area where AI has yet to deliver.
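To put that estimate in perspective, here is a rough, back-of-the-envelope annualization, assuming the sub-1% figure refers to cumulative productivity growth over the full decade:

\[
(1 + 0.01)^{1/10} - 1 \approx 0.001, \quad \text{i.e. roughly } 0.1\% \text{ per year}
\]

By comparison, long-run US labor productivity growth has historically run on the order of 1.5 to 2% per year, which is what makes the projected gain look so modest.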

The conversation emphasized the importance of redirecting AI development toward tools that augment human capability. Historical parallels, such as the crane or the early Internet, highlighted how technology’s greatest impacts come not from replacing labor, but from amplifying what people can do. 

In this light, AI’s potential lies in empowering workers across sectors to act with near-expert competence through better information access and synthesis. Yet, much of today’s development lacks alignment with actual user needs, leading to fragmented, surface-level adoption rather than deep integration.

There was also concern that corporate pressures are driving premature AI implementation without systemic workflow redesign. Without this foundation, productivity improvements are likely to remain shallow. 

Beyond the economic implications, the conversation turned to the social and political consequences of AI adoption, particularly the risks of exclusion and disruption if deployment is left solely to private actors. In response, participants pointed to AI tools being developed not to displace professionals, such as educators, but to support them in personalizing services for underserved communities.

The underlying argument? Real progress will depend not only on innovation, but on institutional reform and democratic involvement to ensure equitable outcomes.