Increased RFEs on H-1B Visas & Green Cards: Is AI Being Used in the Visa Review Process?
by Lexi Wu on Apr 28, 2025 6:00:00 AM
With Trump’s return to the White House and his push for sweeping reforms to enhance governmental efficiency, artificial intelligence has been integrated into the daily operations of the U.S. government in an increasingly aggressive manner. Federal policy on AI has drastically shifted—where once there was an emphasis on ethics, transparency, and the protection of civil rights, that regulatory framework has now been swiftly dismantled. In its place is a roadmap for technological expansion under the banner of "national security."
Recent developments within immigration-related agencies like USCIS and the Department of Homeland Security (DHS) suggest that AI is not a future concern—it is already actively shaping immigration processing. According to a March 2025 report from Boundless, “DHS is already using over 200 AI systems, many of which remain undisclosed to the public,” and these systems are being applied in visa screenings, border enforcement, biometric analysis, and even deportation decisions. This is not theoretical or experimental—AI is currently embedded into core decision-making pipelines that directly affect immigrants' lives.
For example, ICE’s “Hurricane Score” and CBP’s use of AI-powered drone surveillance and social media analysis tools such as Fivecast-Ony are already in deployment. Perhaps most significantly, ICE recently signed a $30 million contract with Palantir to develop "ImmigrationOS," an integrated platform designed to automate deportation workflows and minimize human discretion.
Table of Contents
- The DHS and Its AI Black Box
- The Illusion of Transparency: AI Projects Operating Under the Radar
- The Risks of AI in the Justice System
- Conclusion: Algorithmic Governance and Its Consequences
The DHS and Its AI Black Box
Earlier this year, the Trump administration repealed Executive Order 14110—an AI ethics mandate from the Biden era—and replaced it with Executive Order 14179: the "Removing Barriers to American Leadership in Artificial Intelligence" directive. The core message of this new order can be summed up in six words: deregulate the market, strip away oversight.
While this move is framed as a push to boost innovation and win the global AI race, the real issue arises when the federal government holds the keys to a powerful AI arsenal, free from transparency, accountability, or civil rights protections. In this case, technology becomes more than a neutral tool—it becomes both sword and shield for those in power.
According to a new report by the nonprofit Just Futures Law, DHS is currently operating over 200 internal AI programs, most of which have not been publicly disclosed. These systems are being widely deployed across:
- Visa and immigration screening
- Border enforcement
- Deportation decisions
- Biometric identity verification
- Social media behavior prediction
- Drone and facial recognition surveillance
Specific systems currently in use include:
- ICE’s “Hurricane Score” and “Risk Classification Assessment,” which determine whether a migrant should be detained, released, or electronically monitored;
- CBP’s border facial recognition and drone surveillance, social media scanning software such as Fivecast-Ony, and systems like "Project Babel";
- USCIS’s integration of big data cross-checking tools to review all types of immigration applications and assign "behavior scores" to applicants;
- A recent $30 million contract awarded by ICE to Palantir to develop “ImmigrationOS,” an integrated deportation platform designed to embed technological efficiency into removal proceedings, making them faster, colder, and less subject to human discretion.
The Illusion of Transparency: AI Projects Operating Under the Radar
Despite public criticism from civil rights groups, many DHS AI programs have continued operating under new names even after being declared “terminated” in official reports. This has resulted in a highly opaque "black box" oversight system. Legal experts are sounding the alarm: DHS is restructuring the entire U.S. immigration enforcement model under the guise of AI, bypassing proper audits and external accountability.
Traditionally, USCIS has served as a benefits-processing agency, while enforcement responsibilities fell to ICE and CBP. But the recent spike in RFEs demanding detailed identity and behavior data raises a serious question: Is USCIS now acting as a data-gathering proxy for immigration enforcement?
Following multiple executive orders under Trump that encourage the integration of AI into enforcement, data sharing and risk-profiling algorithms linking USCIS with DHS's enforcement agencies are reportedly becoming increasingly intertwined.
The Risks of AI in the Justice System
A 2024 report from the Department of Justice (DOJ) acknowledges that while AI can improve efficiency in areas like crime prediction, risk assessment, and forensic analysis, it also introduces grave risks of systemic bias, discriminatory enforcement, and erosion of civil liberties. The report identifies four high-risk areas:
- Facial recognition and remote surveillance
- Automated forensic analysis systems
- Predictive policing
- Recidivism risk scoring models
The problem is not AI technology itself, but who uses it, and for what purposes. In a highly politicized federal environment, these tools are ripe for misuse: suppressing dissent, enabling selective enforcement, or manipulating judicial outcomes.
AI systems are often trained on historical enforcement data—data that may already be tainted by racial bias, regional disparities, and vague legal definitions. These issues can be embedded in the algorithms and amplified through deployment. Without community input, human oversight, or judicial review, this leads to what experts are calling "technologically enabled authoritarianism."
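To make the bias-amplification point concrete, here is a minimal, purely illustrative sketch, not a depiction of any actual DHS or ICE system: a toy risk-scoring model is trained on historical detention decisions in which one group was detained more often for reasons unrelated to actual risk, and the model learns to reproduce that disparity. All data, feature names, and thresholds below are invented for illustration.

```python
# Illustrative only: a toy risk-scoring model trained on biased historical
# decisions. All data and feature names are invented; this is not a real
# DHS/ICE system, just a sketch of how bias in training data is reproduced.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two synthetic features: "group" (a proxy attribute such as nationality or
# region, encoded 0/1) and "risk_signal" (the factor that should actually matter).
group = rng.integers(0, 2, size=n)
risk_signal = rng.normal(size=n)

# Historical labels: past decisions were driven partly by real risk, but people
# in group 1 were detained more often regardless of risk. That disparity is the
# bias baked into the training data.
past_detained = (risk_signal + 1.5 * group + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([group, risk_signal])
model = LogisticRegression().fit(X, past_detained)

# The trained "risk score" now penalizes group membership itself:
p_group0 = model.predict_proba([[0, 0.0]])[0, 1]
p_group1 = model.predict_proba([[1, 0.0]])[0, 1]
print(f"Predicted detention probability, same risk, group 0: {p_group0:.2f}")
print(f"Predicted detention probability, same risk, group 1: {p_group1:.2f}")
# The second number is far higher even though the underlying risk is identical,
# showing how a historical pattern becomes an automated scoring rule.
```

A score produced this way can look "objective" and data-driven while quietly encoding the very disparities in the records it was trained on, which is why the absence of audits and outside review matters so much.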
Conclusion: Algorithmic Governance and Its Consequences
The question “Is AI already being used in immigration?” is no longer hypothetical. The answer is yes, and in ways that are rapidly expanding and increasingly opaque. As Boundless notes, “many AI programs continue to operate under renamed identities, even after public criticism,” highlighting the extent to which DHS has created a shadow ecosystem of algorithmic surveillance and scoring with little public oversight.
This shift represents a fundamental change in how immigration is managed. Where officers once made decisions based on direct documentation and human review, algorithms now assist—or in some cases, dominate—those decisions behind the scenes. This change undermines transparency, weakens accountability, and raises critical questions about fairness and due process.
As AI becomes the invisible engine behind key decisions in immigration and enforcement, the lack of visibility, appeal mechanisms, or explanation puts every immigrant’s future in the hands of a system they cannot see or challenge. It is not simply a policy change—it’s the construction of a new architecture of control, powered by data, and shielded from scrutiny.
If you are facing challenges and have questions, you can discuss them with peers on the GoElite Forum!
Contact us now! We will help you find the school that is right for you!