Axios reports that congressional offices face strict limits on the use of ChatGPT and similar generative AI tools. According to a memo obtained by Axios, Catherine Szpindor, the Chief Administrative Officer of the House of Representatives, has set out specific conditions for the use of large language models in congressional offices.
Under these guidelines, congressional staff are permitted to use only the paid ChatGPT Plus service, which offers enhanced privacy controls. However, usage is limited to “research and evaluation” purposes and cannot be integrated into staff members’ day-to-day work.
Furthermore, even with ChatGPT Plus, House offices may use the chatbot only with publicly accessible data, and privacy features must be manually enabled to prevent interactions from feeding data back into the model. The free tier of ChatGPT is currently prohibited, as are all other large language models.
Axios has reached out to the House for comment and will report further information as it becomes available. The restrictions align with concerns raised by other institutions and companies about the risks and misuse of generative AI: Republicans have drawn criticism for an AI-generated attack ad, Samsung staff purportedly leaked sensitive data while using ChatGPT, and some schools have banned these systems over concerns about cheating.
Both chambers of Congress have been actively working to regulate and govern AI. Representative Ritchie Torres introduced a House bill that would mandate disclaimers for the use of generative AI, while Representative Yvette Clarke aims to require similar disclosures for political ads. Senators have held hearings on AI and put forward legislation to hold AI developers accountable for harmful content produced with their platforms.
Given the current climate surrounding AI regulation, the House's restrictions on generative AI tools fit squarely within broader efforts to mitigate risk and ensure these technologies are used responsibly.