AI-generated summaries of some NTIA RFC public submissions

The extracts below were generated by a program using the OpenAI GPT-4 API.
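The program itself is not reproduced here. As a rough illustration only, a summary like those below might be produced along the following lines; the model name, prompt wording, and the `load_submission_text` helper are assumptions, not the actual pipeline.

```python
# Illustrative sketch only: the actual summarization program is not included
# in this document. Model name, prompt, and load_submission_text() are
# assumptions for demonstration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_submission(docket_id: str, text: str) -> str:
    """Ask GPT-4 for a bullet-point summary of one RFC submission."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "Summarize the following NTIA RFC submission "
                           "as roughly ten concise bullet points.",
            },
            {"role": "user", "content": f"{docket_id}\n\n{text}"},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

# Hypothetical usage:
# text = load_submission_text("NTIA-2023-0005-1233")  # e.g., extracted from the attached PDF
# print(summarize_submission("NTIA-2023-0005-1233", text))
```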

NTIA-2023-0005-1233

online comments

BCBSA comments attached

attached document

- The Blue Cross Blue Shield Association (BCBSA) has submitted comments on the Artificial Intelligence (AI) Accountability Policy Request for Comment (RFC) by the National Telecommunications and Information Administration (NTIA).
- BCBSA supports the continued research and development of best practices and standards for AI, including algorithm documentation, testing, use, and auditing.
- The association believes that consumer trust in AI can be achieved through self-regulatory, regulatory, and other measures.
- BCBSA is leveraging technology to provide innovative solutions and services to its members and improve patient-focused care programs.
- BCBSA has highlighted the importance of aligning with the NIST AI Risk Management Framework and regulatory alignment on AI.
- The association cautions that, if not approached responsibly, audits and assessments could hinder the development of trustworthy AI and make American industries less competitive.
- BCBSA supports NTIA’s alignment with the NIST AI RMF and White Paper, which recommends risk management across the AI lifecycle and a socio-technical approach.
- BCBSA agrees that controls should be put in place to mitigate adverse bias throughout the AI lifecycle.
- Policymakers must consider HIPAA’s and the HITECH Act’s restrictions on information flow when considering third-party access to inputs for AI audits or assessments.
- The U.S. government should fund continued research into AI systems and strategies to mitigate unintended consequences, including bias, to improve data equity.

NTIA-2023-0005-1144

online comments

See attached file(s)

attached document

- ARVA is a non-profit organization focused on managing vulnerabilities in AI systems.
- The AI Vulnerability Database (AVID) is ARVA's main project, combining a taxonomy of AI risks with a knowledge base of AI system flaws.
- ARVA advocates for cybersecurity practices to be used as models for the development of AI resources.
- AVID is being developed as a community-driven, open-source repository for AI system vulnerabilities.
- ARVA stresses the need for a robust process for adjudicating AI vulnerabilities.
- The organization supports the creation of standardized reporting formats and centralized repositories for AI vulnerabilities.
- ARVA advocates for standardized public disclosure of vulnerabilities and audit results across sectors.
- The organization sees the need for an adjudicating body to translate technical reports or audits into meaningful information for affected communities.
- ARVA aims to co-create resources with various communities to make AI less harmful.
- According to ARVA, ethical disclosure should be used where there is evidence that reports will be taken seriously and problems mitigated.

NTIA-2023-0005-1108

online comments

See attached file(s)

attached document

- Johnson & Johnson has responded to the National Telecommunications and Information Administration's request for comment on AI accountability policy.
- The company commends the engagement of stakeholders on issues of AI accountability and trust.
- J&J is using AI for drug discovery, surgical instrument optimization, and rare adverse event identification.
- The company supports an ethical, compliant, and secure approach to AI, emphasizing public trust.
- J&J proposes a policy framework that encourages innovation, data culture, social cohesion, ethical behavior, and international cooperation.
- The company supports the Business Roundtable AI Roadmap for Responsible AI and its recommendations.
- J&J promotes a holistic approach to AI, including principles, best practices, and voluntary industry standards.
- The company emphasizes the importance of privacy, data protection, addressing bias, and enhancing AI explicability.
- J&J supports international cooperation on ethical guidelines for a consistent global approach.
- The company recognizes the potential of AI in healthcare and is committed to increasing stakeholder engagement and collective action.

NTIA-2023-0005-1096

online comments

Please see attached file

attached document

1. AI technologies are increasingly being incorporated into consumer technology without adequate consideration of potential risks and impacts.
2. Most AI systems are based on algorithms and data controlled by large tech companies, some of which have disbanded their ethics teams or have executives who resist criticism of their work.
3. There is a need for stricter guidelines on AI technology capabilities and use, as well as protections for data collection and model training.
4. Companies using AI technologies should adhere to principles of transparency, accountability, and ethical participation, including disclosing training data, language model sources, and product capabilities.
5. Transparency should encompass a comprehensive report of data sources used in model training, particularly for companies that publish APIs for software developers.
6. Accountability should include well-defined rules for potential harm caused by AI technology, assessed by a diverse group of technology builders, social scientists, academics, and marginalized communities.
7. Companies that build large language models should permit third-party audits of their data sources, storage, and use.
8. Ethical participation should involve a clear explanation of the company's approach to AI system use, including transparency, accountability for system misuse, and compliance with information requests about system creation.
9. The NTIA should take into account issues of technological antitrust, individual copyright agency, and human rights protections in their AI accountability requirements.
10. The current system favors large corporations, and regulation could further restrict competition from smaller agencies, open source communities, and individuals.

NTIA-2023-0005-0108

online comments

Please find attached; I am giving some explanation and examples of my thinking and the reasons I propose these.

attached document

1. The user believes AI should not be used in certain areas of human society, such as government, legislature, court, and judicial processes due to potential negative impacts on marginalized people and democracy.
2. The use of AI in other areas, such as mental health products, is open for debate.
3. The submission addresses the NTIA's questions on transparency in AI-generated content.
4. The advancement of generative AI has led to a significant increase in AI-generated content, which could make content moderation on social media platforms more challenging.
5. Policymakers should enforce a labeling requirement for AI-generated content, similar to other product labeling laws.
6. AI algorithms are often designed to persuade users to make certain decisions, which can sometimes involve providing favorable but less truthful information.
7. The government should provide high-level guidance on the social and economic impact of policies, with third-party auditors responsible for defining detailed criteria.
8. The auditors for AI should be independent and not affiliated with AI vendors, similar to privately funded financial audits.
9. Vendors are encouraged to self-examine and disclose potential harms with auditors, even if the information is proprietary.
10. In the initial phase, AI vendors can self-test to confirm compliance with government guidelines, without the involvement of third-party auditors.

NTIA-2023-0005-1200

online comments

See attached file.

attached document

- The Investor Alliance for Human Rights has provided input to the National Telecommunications and Information Administration on AI accountability.
- The Alliance is a platform for responsible investment that respects human rights using international frameworks.
- It helps investors address human rights risks in their investment activities and advocates for responsible business policies.
- The rapid development of AI systems without human rights due diligence processes has led to adverse human rights impacts.
- The Alliance supports global regulations that incentivize and enable responsible development and use of AI.
- It believes that companies should commit to respecting human rights, conduct continuous human rights due diligence, and provide effective remedy for any adverse impacts.
- The Alliance proposes mandatory human rights due diligence requirements for developing and deploying AI systems.
- It also proposes prohibitions on AI models and systems posing high risks of violating human rights, and safeguards for AI systems used for law enforcement or national security.
- The Alliance emphasizes the need for remedy and accountability, including the right to an effective remedy for those whose rights have been infringed by an AI system.
- It concludes that investors are important stakeholders in enabling trustworthy AI and looks forward to participating in ongoing discussions on this issue.

NTIA-2023-0005-1238

online comments

See attachment

attached document

- Patient care systems in America and globally have been found to be unsafe, ineffective, or biased.
- The Prescription Drug Monitoring Program (PDMP) by Bamboo Health has been used as an unregulated law enforcement tool, leading to unconstitutional surveillance and denial of medical care.
- PDMP algorithms should be regulated by the FDA as a clinical decision-making tool, not a tool for law enforcement.
- The Center for U.S. Policy has petitioned for Bamboo Health's NarxCare algorithms to be classified as a medical device subject to FDA regulation.
- The FDA has a moral and ethical duty to regulate the PDMP as a clinical decision-making tool.
- PDMPs can exacerbate discrimination against patients with complex and stigmatized medical conditions.
- The FDA is authorized to regulate PDMP predictive diagnostic software platforms as medical devices but has failed to do so.
- Surveillance algorithms are being used to sell "risk scores" to doctors, insurers, and hospitals without patient consent and with little regulation.
- The DEA's suspension of a California doctor’s license to prescribe opioids led to multiple preventable patient deaths.
- There is a need for public transparency and federal oversight mechanisms, including FDA regulation, to prevent further public harm.

NTIA-2023-0005-1242

online comments

Please see attached file.

attached document

- Accenture, a global professional services company, has submitted a comment on AI Accountability Policy to the National Telecommunications and Information Administration (NTIA).
- Accenture emphasizes the importance of a well-designed AI governance framework for any agency developing or deploying AI.
- The company's response is divided into three categories: Human + Machines, Baseline Accountability, and Feedback Tools.
- In the Human + Machines category, Accenture highlights the importance of human participation in AI systems for improving the user experience and decision-making processes.
- In the Baseline Accountability category, Accenture suggests that accountability measures should be based on existing human processes and included in the design stage.
- In the Feedback Tools category, Accenture suggests using technical tools like guardrails, templates, and model self-validation during the design, development, and deployment stages to refine models and establish an automated accountability process.
- Accenture supports NTIA's efforts to develop AI system accountability measures and policies and is open to further dialogue on the topic.
- The company recommends creating baseline measures based on current human performance to achieve accountability and trust outcomes.
- Accenture outlines the need for accountability mechanisms to exist in various facets, stakeholders, and phases to scale AI with confidence.
- The government should act as a convener for academia, public and private sectors to collaborate on best documentation practices.

NTIA-2023-0005-1368

online comments

- WITNESS is an international human rights organization that uses video and technology to protect human rights.
- The organization's Technology Threats and Opportunities Team works with emerging technologies that could affect society's trust in audiovisual content.
- WITNESS has conducted extensive research and advocacy on synthetic media and is preparing for the impact of AI on truth discernment.
- The organization has consulted with human rights defenders, journalists, content creators, fact-checkers, and technologists across four continents to identify concerns about deepfakes, synthetic media, and generative AI.
- WITNESS has developed guidelines for principled action and recommendations for policy makers, technology companies, regulators, and other stakeholders.
- The submission focuses on how AI accountability mechanisms can inform people about the operation of such tools and their compliance with trustworthy AI standards.
- The document is based on WITNESS' 30 years of experience in helping communities create trustworthy photo and video for human rights advocacy, protect against content misuse, and challenge misinformation targeting at-risk groups and individuals.

attached document

- WITNESS, a human rights organization, submitted a request for comment on AI Accountability Policy to the National Telecommunications and Information Administration.
- The organization helps people use video and technology to protect their rights and has conducted research on synthetic media and AI's impact on truth discernment.
- WITNESS has developed guidelines for principled action and recommendations for policy makers, technology companies, regulators, and other stakeholders.
- The submission focuses on how AI accountability mechanisms can inform people about how such tools are operating and/or whether the tools comply with standards for trustworthy AI.
- AI accountability mechanisms should prioritize the protection of human rights and democracy, with a focus on those most impacted by synthetic media and generative AI.
- Responsibility for AI accountability should be shared among all stakeholders in the AI, technology, and information pipeline.
- AI accountability mechanisms should be designed and deployed with human rights standards, laws, and practices embedded within them.
- Technologies like Proofmode and eyeWitness, developed by the human rights sector, track the provenance of media and prove its integrity.
- Visible indicators of synthetic media, such as watermarks or labels, should be designed and implemented with human rights and accessibility concerns in mind.
- Accountability mechanisms should mandate that companies developing AI systems undertake comprehensive human rights assessments prior to deploying AI models and tools.

NTIA-2023-0005-1426

online comments

- Hugging Face appreciates the work of the National Telecommunications and Information Administration (NTIA) Task Force on AI accountability.
- The company's comments are based on their experience as an open platform for state-of-the-art AI systems.
- Hugging Face is a community-oriented company based in the U.S. and France, aiming to democratize Machine Learning (ML).
- It is the most widely used platform for sharing and collaborating on ML systems.
- The company hosts machine learning models and datasets, supports AI research, and provides educational resources to make AI accessible to all.
- Hugging Face recommends focusing on transparency and data and algorithm subject rights for AI accountability.
- The company believes this approach will involve more stakeholders and bring more expertise and resources to ensure AI systems benefit users.

attached document

- Hugging Face is a U.S. and France-based company focused on democratizing Machine Learning (ML) and operates as an open-source and open-science platform.
- AI accountability mechanisms such as certifications, audits, and assessments ensure the safety, security, and well-being of users as technology becomes more integrated into society.
- The shift from academic research to commercial product development has created an accountability gap, particularly with respect to the technology’s social impact.
- AI technology has a significant social impact due to its rapid development and adoption, and it is important to navigate complex value tensions among different stakeholders.
- The AI value chain is complex and involves multiple stages, including model development, training, and deployment, and accountability efforts should be distributed across the value chain.
- The accountability process should address data quality and data voids by curating appropriate test sets during the development process of AI systems.
- AI accountability "products" should be communicated to different stakeholders through a centralized repository of audits to build stronger methodology and avoid duplication of work.
- The most significant barriers to effective AI accountability in the private sector include lack of information about systems, lack of legal protections for AI impacts, complexity of impact evaluations, and the unpredictable nature of evolving AI harms.
- Comprehensive privacy legislation, such as the EU’s General Data Protection Regulation (GDPR), is an important tool to govern the use of personal data by AI systems.
- AI audits and assessments incur different categories of costs, including technical expertise, legal and standards expertise, deployment and social context expertise, data creation and annotation, and computational resources.

NTIA-2023-0005-1349

online comments

See attached file(s)

attached document

1. The U.S. Department of Commerce's National Telecommunications and Information Administration (NTIA) has issued a request for public comment on AI Accountability Policy.
2. The Center for Responsible AI at New York University (NYU R/AI) was established in 2020 to advance responsible AI through research, dialogue, and technical collaborations.
3. NYU R/AI has worked on public education initiatives, including a course called "We are AI: Taking control of technology," and two comic book series.
4. The concept of contextual transparency, which integrates social science, engineering, and information design to improve Algorithmic Decision Systems (ADS) transparency, was introduced by Sloane and Stoyanovich.
5. The authors suggest that AI accountability mechanisms should consider the AI lifecycle, and technical accountability primitives should be integrated directly into the data lifecycle management.
6. The study shows that it is often possible to reproduce results of peer-reviewed social science papers over privacy-preserving synthetic versions of public datasets.
7. The NYU R/AI team has been developing educational materials and methodologies, teaching responsible AI to various groups including the public, data scientists, government accountability professionals, librarians, and practitioners at several large commercial entities.
8. The lack of a general federal data protection or privacy law is a barrier to effective AI accountability.
9. The federal government should fund responsible AI education initiatives and support small and medium-sized businesses in their transition into responsible AI.
10. Julia Stoyanovich, Arnaud Sahuguet, and Gerhard Weikum have worked on a platform for responsible data science named Fides.

NTIA-2023-0005-1339

online comments

Attached please find the response of The Institute for Workplace Equality to NTIA's AI Accountability Policy Request for Comment.

attached document

- The Institute for Workplace Equality has responded to the AI Accountability Policy Request for Comment by the U.S. Department of Commerce’s National Telecommunications and Information Administration (NTIA).
- The Institute aims to educate NTIA on the existing laws and regulations that protect individuals in the workplace from AI-related harms.
- The Institute argues against additional laws and regulations governing the application of AI in employment.
- An Artificial Intelligence Technical Advisory Committee (AI TAC) has been created by the Institute to draft a report on best practices for employers using AI in human resources.
- The Institute believes that employee protection is a matter of following existing law and guidance from workforce regulating agencies like the EEOC and OFCCP.
- The Institute suggests that Federal agencies should clarify that AI applications are governed by existing anti-discrimination laws and protections.
- Companies like Google, Microsoft, and IBM recommend updating existing oversight and enforcement regimes to apply to AI systems.
- The EEOC released a technical assistance document in May 2022 explaining how the ADA applies to AI-enabled hiring tools.
- The EEOC also released guidance in May 2023 explaining that employers can be held liable for using AI tools in hiring and promotions that cause a disparate impact on individuals based on race, sex, and other protected categories.
- The Institute for Workplace Equality recommends that any new AI-related requirements should recognize that the existing comprehensive statutory scheme is sufficient to address unlawful discrimination in the workplace that is AI-related.

NTIA-2023-0005-1381

online comments

- DLA Piper's global AI and Data Analytics Practice (DLAI) has responded to NTIA's AI Accountability Policy Request for Comment.
- The response is aimed at informing policies related to the development of AI audits, assessments, certifications, and other trust-earning mechanisms in AI systems.
- DLAI's response includes 26 pages of targeted inputs addressing key questions related to AI accountability.
- The response covers AI accountability issues throughout the entire AI lifecycle.

attached document

- DLA Piper’s global AI and Data Analytics Practice (DLAI) has responded to NTIA’s AI Accountability Policy Request for Comment to help inform policies related to the development of AI audits, assessments, certifications and other mechanisms to earn trust in AI systems.
- DLAI represents AI makers and adopters, tests AI for trust, safety, and compliance, and builds AI-enabled tools to help respond to legal and compliance challenges.
- The main purposes of AI accountability mechanisms like certifications, audits, and assessments include verifying compliance with ethics, safety and fairness standards, building public trust and confidence, identifying risks or issues proactively, comparing to industry best practices, fulfilling legal and regulatory requirements, managing organizational risk, informing training and development, providing transparency, and preventing misuse.
- AI accountability mechanisms should cover topics like data practices, model transparency, testing rigor, performance benchmarks, risk assessments, monitoring plans, human oversight, controls & safeguards, handling failures, transparency, and training for users.
- AI audits and assessments should be integrated into broader accountability mechanisms and review processes focused on goals like human rights, privacy, security, diversity and inclusion for efficiency, holistic oversight, centralized expertise, clearer incentives, consistent standards, and shared lessons.
- AI accountability practices can have a meaningful impact even without legal standards enforcement, fostering a responsible culture and building public understanding and trust.
- AI accountability mechanisms like certifications, audits, and assessments promote trust and confidence externally and influence positive changes internally.
- AI accountability measures can effectively deal with systemic and collective risks of harm, but require understanding of potential paths for bias and need to be conducted at several stages.
- Developers and deployers of AI systems should maintain and make available documentation for each phase of Ideation, Development, and Deployment.
- The lack of a general federal data protection or privacy law poses barriers to effective AI accountability, including a fragmented compliance landscape, uncertainty disincentivizing openness, difficulty implementing strong privacy controls, public distrust and scrutiny, varying standards, FOIA exposure, and weaker incentives for caution.

NTIA-2023-0005-1391

online comments

See attached file(s)

attached document

1. The submission is for a request for comment on AI Accountability Policy, aiming to gather diverse perspectives on AI accountability.
2. The National Telecommunications and Information Administration issued the notice on 04/13/2023; the comment is authored by Kassandra Popper, a software developer.
3. AI accountability mechanisms should ensure AI products function as advertised and according to industry norms, with audits for quality assurance and investigation of product issues.
4. AI accountability practices can provide a competitive advantage for AI industry participants, even without legal standards.
5. AI accountability goals such as not contributing to harmful discrimination, being safe and legal, should be treated within the context of the AI system's application domain.
6. Over-regulation of the AI industry could impede progress in AI R&D and delay the development of more trustworthy AI systems.
7. AI Accountability should focus on two parts of the value chain: data collection for training a machine learning system and the distribution of the AI system to the customer or other affected person.
8. AI Accountability measures should be mandatory for applications where there is a significant risk of serious or permanent harm to people or property.
9. The absence of a federal law specifically for AI systems is not a hindrance to effective AI accountability, as AI systems can be regulated at various governmental levels.
10. AI accountability policies should be sector-specific due to unique application requirements in each sector, with regulation focusing on inputs to validation, especially data needed to confirm a product's functionality and fitness for purpose.

NTIA-2023-0005-1314

online comments

See attached file(s)

attached document

1. The Integrity Institute has responded to the NTIA's "AI Accountability Policy Request for Comment," emphasizing the importance of understanding the real-life implications of irresponsibly deploying AI systems.
2. The Institute's co-founders, Sahar Massachi and Jeff Allen, express their desire to remain engaged in future efforts in this area.
3. The Integrity Institute is a think tank composed of tech professionals experienced in integrity roles within social internet platforms.
4. The Institute aims to share its expertise with those involved in the creation and governance of the social internet, focusing on AI developments, particularly generative AI.
5. The Institute advocates for the early incorporation of safety, fairness, and transparency in AI development.
6. AI accountability mechanisms such as certifications, audits, and assessments should contribute to transparency and explainability of the AI system and its use.
7. AI accountability mechanisms should demonstrate that the producers of AI systems understand how their systems are working and have fully studied the system design and outcomes.
8. AI accountability practices may not have meaningful impact without legal standards and enforceable risk thresholds.
9. Developers should be legally required to notify any people who use their products that AI systems were used in the production of those products.
10. The current AI development context is characterized by a race to market among industry players, leading to a system of "post-hoc safety" where users provide testing in production.

NTIA-2023-0005-1283

online comments

Please find CIPL's comments attached.

attached document

- CIPL supports federal data privacy legislation in the United States and has responded to the NTIA request for comments on AI system accountability measures and policies.
- CIPL's Accountability Framework includes leadership and oversight, risk assessment, policies and procedures, transparency, training and awareness, monitoring and verification, and response and enforcement.
- AI accountability mechanisms should cover the entire life cycle from design and development to application and use, according to CIPL.
- AI audits or assessments should be integrated into other accountability mechanisms that focus on human rights, privacy protection, security, and diversity.
- AI technologists confirm that AI systems must be tested by reference to potentially sensitive categories of data, such as gender, race, and health, to avoid bias.
- CIPL recommends that NTIA highlight the importance of innovation and robust competition, and balance the disclosure of AI algorithms and decision-making processes against commercial IP rights and business interests.
- AI Accountability frameworks should address risks of harm to individuals and systemic and collective risks of harm.
- The US needs a comprehensive, risk-based federal privacy law to create baseline protections and consistency across industry and sectors.
- AI developers and deployers should be required to conduct context-based risk assessments to support external audits and reduce costs.
- The CIPL Accountability Framework outlines examples of accountable AI activities undertaken by various organizations, including public commitment to respect ethics, values, and principles in AI development, deployment, and use.

NTIA-2023-0005-1281

online comments

Please see the attached comments from Lumeris.

attached document

- Lumeris is a leading provider of technology and insurance capabilities in the healthcare industry.
- The company has responded to the National Telecommunications and Information Administration’s (NTIA) AI Accountability Policy Request for Comment.
- Lumeris supports value-based care models and believes in placing patients at the center of care and decision-making.
- The company uses technology solutions to improve health outcomes and optimize provider practices.
- Lumeris' technology suite includes LumerisRealize, LumerisEngage, and LumerisProtect.
- Lumeris believes AI accountability mechanisms should focus on the systems and decisions driven by AI and should ensure accuracy and compliance with regulations and ethical considerations.
- The company emphasizes the importance of AI systems not contributing to harmful discrimination or misinformation.
- Lumeris argues against legislation requiring human alternatives to AI systems, stating that it could result in worse outcomes for patients.
- The company believes that AI accountability policies should reflect the unique challenges and risks of individual sectors.
- Lumeris advocates for AI requirements that are uniform across the United States within each sector, to avoid conflicting state laws.

NTIA-2023-0005-1447

online comments

See Attached

attached document

- Accountability mechanisms can potentially hinder the development of trustworthy AI and impact AI innovation and competitiveness, particularly for smaller developers and startups.
- Strict regulations, such as GDPR, can disproportionately burden small startups and independent developers, impeding their progress and financial stability.
- Stringent accountability measures can deter risk-taking, crucial for innovation, and divert resources away from product development and technological advancement.
- A well-designed regulatory framework should balance the need for accountability with fostering a dynamic and innovative AI sector.
- Accountability in the AI value chain should be end-to-end, with special attention to points of human interaction.
- Blockchain technology could be used to coordinate and communicate accountability efforts across the AI value chain.
- Developers and deployers of AI should keep essential records to support accountability while prioritizing user privacy.
- Significant barriers to effective AI accountability in the private sector include a lack of standardization and insufficient incentives.
- The absence of a general federal data protection or privacy law is a barrier to effective AI accountability.
- AI audits and assessments impose costs, particularly burdensome for smaller businesses. Regulatory frameworks should complement existing workflows rather than add administrative burden.

NTIA-2023-0005-1264

online comments

Please find attached comments of CTIA.

attached document

- CTIA has submitted comments to the Department of Commerce National Telecommunications and Information Administration (NTIA) as it prepares a report on AI accountability policy.
- The comments emphasize the benefits of AI in various sectors, including commerce, health, transportation, cybersecurity, and the environment.
- CTIA advocates for a harmonized, federal, risk-based approach to AI policy, in line with the National Institute of Standards and Technology's AI Risk Management Framework.
- CTIA believes that existing laws and regulations should be considered before implementing new audit or assessment requirements for AI.
- The comments promote a risk-based approach to AI accountability, considering both the risks and benefits of AI use cases.
- CTIA calls for federal leadership on AI policy to avoid conflicting AI regulations at the federal and state levels.
- The lack of federal harmonization across sectors is seen as a barrier to innovation and leadership in AI technology.
- CTIA suggests that a voluntary, flexible, and risk-based approach to managing AI risks should be promoted.
- CTIA recommends the adoption and use of the AI Risk Management Framework developed by NIST.
- The comments, submitted on June 12, 2023, are a collaborative effort of Thomas C. Power, David Valdez, Avonne S. Bell, and Justin Perkins of CTIA.

NTIA-2023-0005-1207

online comments

See attached PDF.

attached document

- Wilhelmina Randtke submitted comments to the National Telecommunications and Information Administration on AI Accountability Policy.
- Her focus was on question 19, which pertains to public expectations for audits and assessments of AI systems in public programs.
- Randtke stressed the significance of accountability in democratic government, achieved through the rulemaking process.
- She proposed that software implementing government law or policy should undergo the rulemaking process, as it is a form of rule.
- Randtke highlighted that bypassing the rulemaking process when transitioning from paper to electronic processes violates that requirement.
- She stated that software making decisions about individuals is a rule, as it applies logic consistently to many people.
- Randtke argued that if an agency relies on a piece of code in most cases, it should be considered a rule and should undergo the rulemaking process.
- She maintained that the complexity of the software, the use of a contractor, or the transition from paper to electronic does not exempt it from the rulemaking process.
- Randtke emphasized that the rulemaking process is well established and can accommodate speedy action when needed.
- She concluded by urging federal agencies to send all decision-making software used by the government through the existing and mandatory rulemaking process.

NTIA-2023-0005-1190

online comments

- Knowledge Ecology International (KEI) opposes provisions in US trade agreements, such as the USMCA Article 19.16: Source Code, that limit government's ability to enforce software code or algorithm transparency.
- KEI highlights the growing interest in the role of algorithms and AI services, and the associated societal risks.
- The organization supports the NTIA's request for comments on AI Accountability Policy.
- KEI advocates for policy makers to require transparency in software code and algorithms in certain areas and topics.
- The organization argues against trade agreements that broadly prohibit government from mandating transparency where there is a compelling case for government intervention.
- KEI believes such transparency can mitigate harm and promote welfare-enhancing policies that make services more trusted, useful, or affordable.

attached document

- Knowledge Ecology International (KEI) is a non-profit organization that researches and evaluates domestic and international policies and norms.
- KEI has raised concerns about provisions in several plurilateral trade agreements that restrict the U.S. and other governments' ability to require access to software source code or its corresponding algorithm.
- These provisions are found in the Trans Pacific Partnership (TPP), the Trade in Services Agreement (TiSA), and the Agreement between the U.S., Mexico, and Canada (USMCA).
- KEI believes these provisions are overly restrictive and lack adequate exceptions, even for software licensed under obligations to make its code public.
- KEI points out that the growing interest in algorithms and AI services has revealed numerous societal risks, many of which are addressed in the NTIA's request for comments on AI Accountability Policy.
- KEI proposes that policy makers should mandate transparency in software code and algorithms in certain areas, and this should not be prohibited by trade agreement provisions.
- KEI notes that the European Commission has established a European Centre for Algorithmic Transparency and that many groups are advocating for transparency measures in AI services.
- KEI concludes that it is unwise to enter into trade agreements that broadly prevent governments from requiring transparency in areas where there is a strong case for government intervention.

NTIA-2023-0005-1152

online comments

See attached file(s)

attached document

- The consortium (the Computing Community Consortium, CCC) highlights the need for clear definitions of terms like recourse, accountability, transparency, and audit in the context of AI.
- The consortium proposes a four-level audit for AI systems, including data quality, model outputs, real-world performance, and system impacts.
- The consortium emphasizes the need for accountability methods to be tailored to the specific use case of the AI system.
- The consortium stresses the importance of validating code and data early in the process to prevent biased decision-making.
- The consortium suggests using existing models, such as the classification of human-AI interactions, to guide the level of audit and evaluation for AI technologies.
- The consortium calls for increased transparency from organizations that create and use AI systems, particularly regarding flaws in their models.
- The consortium suggests that a neutral third party could be given access to data and algorithms to ensure accountability in AI systems.
- The consortium warns of the potential high cost of oversight systems and the risk of these costs being passed on to consumers.
- The consortium recommends referring to the CCC’s past workshop report on Assured Autonomy for further insights.
- The consortium responded to a Request for Comment on AI Accountability Policy, emphasizing the complexity and diversity of AI systems.

NTIA-2023-0005-1218

online comments

See attached file(s)

attached document

- Chegg, an education technology company, has submitted its views on the development of AI audits, assessments, and certifications to the NTIA.
- The company uses AI in networking, data storage, and communications solutions and is interested in promoting governance and policies for AI development and growth.
- Chegg believes that NTIA needs to balance innovation and end user interests in developing AI accountability measures.
- The company suggests that AI accountability mechanisms should inform users about AI tools' operations and compliance with trustworthy AI standards.
- Chegg supports AI auditing, which involves monitoring the system for bias, errors, and other issues.
- The company suggests that certification of external industry-led standards and internal codes of conduct can signal AI tools' compliance with trustworthy AI standards.
- Chegg believes that any definition of AI should be adaptable and flexible, as rigid definitions can become outdated due to the rapid evolution of AI.
- The company suggests that NTIA should focus on advancing transatlantic cooperation and regulatory interoperability for a standardized approach to AI oversight.
- Chegg expresses concern that mandated independent third-party auditing of AI systems may lead to national security concerns, trade secret theft, and inaccurate auditing.
- The company believes that a federal privacy law should be part of any AI framework and that Congressional action is the best approach to crafting federal privacy rules.

NTIA-2023-0005-1210

online comments

See attached file(s)

attached document

- Unlearn.AI, Inc. is responding to the NTIA AI Accountability Request for Comment published on April 13th, 2023.
- Unlearn's mission is to advance AI to eliminate trial and error in medicine, particularly in clinical trials.
- Unlearn believes AI accountability mechanisms like certifications, audits, and assessments are crucial to increase public trust in AI.
- AI accountability mechanisms should be enforced by the specific governmental agency that regulates its context-of-use, such as the FDA for AI used in drug development and manufacture.
- The level of accountability mechanisms should depend on the level of risk involved.
- The company believes that governmental agencies should create policies that can accommodate the evolving uses of AI/ML.
- Unlearn suggests that accountability mechanisms with heavy requirements on explainability could limit the development of trustworthy AI systems.
- They argue that terms frequently used in accountability policies, such as fair, safe, effective, transparent, and trustworthy, are too general and should be used appropriately in different sectors.
- Unlearn believes that government policy in the AI accountability ecosystem should be sectoral, based on the context-of-use associated with the AI technology.
- The company advocates for uniform AI accountability requirements within respective sectors.

NTIA-2023-0005-0298

online comments

- The user's issue is not with AI technology itself, but with how it's being used to sample works from artists, creating competition.
- AI programs have been collecting from artists' portfolios without their consent, leading to instances of artists' work being imitated without credit or compensation.
- The user refutes arguments defending this practice, such as the notion that AI art is "transformative" and that machine learning is similar to human learning.
- The user also dismisses the argument that the amount of data sampled from artists' work is minuscule, comparing it to stealing a large sum of money in small increments.
- The user suggests that AI art programs should be restricted to sampling work that is in the public domain, fairly purchased, or volunteered from willing artists.
- The user argues that the development of new technology should be guided by consideration and compassion, and that the current use of AI in art infringes on the rights of artists.
- The user expresses frustration at being criticized for complaining about the misuse of their intellectual property and the potential for their work to be devalued.
- The user emphasizes that for many artists, art is a business and they should have control over how their work is used and presented.
- The user is not demanding the elimination of AI technology, but rather its responsible use.

NTIA-2023-0005-0299

online comments

- AI Art and AI-made materials are considered unethical, unsafe, and untrustworthy.
- These technologies can use the hard work of artists without their permission or consent.
- This practice infringes on artists' rights, who often struggle to maintain control over their intellectual property.
- Allowing companies to use AI, unregulated, to cut costs could negatively impact people's livelihoods and potentially lead to job losses.
- AI art is deemed untrustworthy as it uses non-consenting individuals' work for profit-making, in which they had no involvement.

NTIA-2023-0005-0291

online comments

- AI is a versatile tool with potential for extreme harm.
- AI learning models often use both free and privatized content, which can be seen as theft from creators.
- There is a call for legal requirements to state when a product is made by AI.
- Large penalties are suggested for failure to disclose AI involvement in a product's creation.
- Misuse of AI, such as claiming AI artwork as one's own or using it for political manipulation, is a major concern.
- Urgent action is needed to regulate AI usage and prevent potential misuse.

NTIA-2023-0005-0292

online comments

- Generative AI's output quality and versatility depend heavily on the training data it uses.
- These algorithms are often trained on copyrighted content from the internet, used without the consent of the owners, which is unethical and illegal.
- The AI does not understand concepts like lighting, form, or 3D space, but rather uses statistical patterns in its training data to generate new images.
- A large amount of training data is needed for the AI to generate accurate and realistic content, such as deepfakes or 3D models.
- The amount of content that generative AI apps must have used without permission is likely in the tens of millions, as indicated by data on user consent.
- The misuse of copyrighted content can lead to job losses in creative industries and is a violation of intellectual property rights.
- Generative AI can also be used for harmful purposes, such as creating explicit content, faking images of conflicts, or deepfaking people into saying or doing things they didn't.
- While AI algorithms can be beneficial in certain situations, the current use of generative AI is unethical, illegal, and potentially harmful.
- The author urges for close regulation of unethical, illegal, and dangerous AI practices.

NTIA-2023-0005-0294

online comments

- AI writing and art systems are currently not ethical, safe, or trustworthy.
- The spread of disinformation could be exacerbated by AI use, a concern for lawmakers.
- AI systems like ChatGPT, used in search engines, are susceptible to misinformation as they rely on the same sources as a search query.
- AI can provide incorrect or harmful information due to lack of context or human thought.
- AI art/image systems like Midjourney can generate realistic images based on user input, which can be used to spread misinformation.
- Instances of AI-generated images being mistaken for real events have already occurred, such as images of President Trump being chased by police.
- There is a need for regulation of AI to prevent the spread of misinformation.

NTIA-2023-0005-0295

online comments

- AI is considered unsafe and unethical by some creators due to its ability to learn from content without human consent.
- There are concerns that AI could steal creative work from individuals.
- The use of AI has already had significant impacts on various creative industries.
- The Writers Guild of America (WGA) is striking partly due to the increasing use of AI, along with issues of unfair pay.
- There is a fear that large corporations could replace human workers with AI, further increasing their profits.
- The idea of being replaced by a non-sentient machine is a source of anxiety for some people.

NTIA-2023-0005-0300

online comments

- AI image creation is currently a crude mash-up machine that exploits the work of artists.
- The technology uses the intellectual property of artists, most of whom did not consent to their work being used in this way.
- In the current WGA strike, artists are protesting proposals that would allow studios to use AI to write first drafts and then have WGA members revise them for reduced pay.
- The AI-generated writing is based on the work of the writers that the studios are trying to undercut.
- Using AI in this way is equivalent to stealing an artist's work, altering it slightly, and then demanding they accept less pay because the studio also worked on it.
- Artists are questioning why they should support a technology that profits from their work without compensation or credit.
- Artists are also questioning why they should accept less pay when the technology that seeks to replace or streamline their work is based on the theft of their labor.

NTIA-2023-0005-0307

online comments

- The user is an artist concerned about the use of AI in the art world.
- AI has been used in various fields, including video games and search engines.
- AI art programs generate images based on word prompts and image data, often stolen from artists.
- The user argues that AI-generated images do not qualify as art, as they do not involve human creativity or imagination.
- The user compares AI art to a non-baker using a machine to combine slices of cake from skilled bakers, then claiming credit for the resulting cake.
- The user is not against AI, but against the unethical use of AI art programs that steal from artists and claim credit for their work.
- The user notes that this practice is harmful to artists' livelihoods, as it takes jobs and earns money off stolen work.
- The user calls for laws to protect artists from the malicious use of AI art programs and to make the use of stolen art in AI-generated images punishable by law.
- The user believes that people should be held accountable for illegally and unethically using artists' work in AI programs.
- The user requests action against the unethical use of AI in the art world.

NTIA-2023-0005-0308

online comments

I do not believe that AI as it stands in the hands of Silicon Valley and its obsessive followers will bring anything good into this world.

NTIA-2023-0005-0310

online comments

The U.S. government should protect the livelihood of the country's artists by restricting general usage of AI image/text generators.

NTIA-2023-0005-0319

online comments

- The user has a strong interest in Artificial Intelligence (AI) and has been studying it for a long time.
- The user is currently pursuing a computer science degree with a concentration in AI and online security.
- The user believes that tech CEOs and financiers do not fully understand the technology they use and promote.
- The user criticizes the misuse of AI in storing data from art and writing without proper rights.
- The user argues that technology should enhance our lives and enable us to pursue creative endeavors freely.
- The user criticizes the current art datasets, like LAION-5B, for containing billions of "scraped" art pieces without the original artists' permission.
- The user believes this is a copyright violation and unacceptable, and that datasets should only contain Royalty Free/Creative Commons art or art that has been commissioned and paid for.
- The user insists that original artists must be compensated per use if their art is included in a dataset.
- The user warns against AI becoming a "cheap way to create art," which could harm underpaid artists.
- The user believes AI should work in conjunction with artists to elevate them and pay them fairly.
- The user argues that the use of a dataset should benefit the artists, not those hosting the dataset or the software.
- The user insists that if the technology cannot accommodate fair compensation for artists, it should not be used.

NTIA-2023-0005-0321

online comments

AI is taking away too many jobs; before long, half of all jobs will be taken over by AI, and where does that leave the people? This stuff has to be regulated before everyone is out of work.

NTIA-2023-0005-0323

online comments

- The government should establish an independent committee to regulate AI creation and usage.
- The committee should have powers similar to the FDA to assess the accountability of AI models.
- The committee should develop regulations to evaluate AI models based on expert research in the field.
- The government should fund this research.
- The committee should have various subcommittees focusing on specific sectors such as research, education, business, public awareness, and certification.
- Brazil's national AI policy is a good example of how this can be implemented, with guidelines and actions across six pillars: education and capacity building in AI; AI research, development, innovation and entrepreneurship; AI applications in the private sector, government and public safety; legislation, regulation and ethical use of AI; governance of AI; and international aspects of AI.
- China also has a similar policy, specifically targeting ethics and social responsibility in each of their sectors.

NTIA-2023-0005-0327

online comments

- User expresses deep concern about the lack of AI regulation in the US.
- User is both a consumer and a professional audiobook narrator and artist.
- Tech companies are criticized for harvesting large amounts of data without permission, including from copyrighted sources.
- The technology offered by these companies is viewed as a Trojan horse, tricking humans into training more untested and unethical tech.
- The user believes this technology is potentially unsafe for the future of humanity.
- The user strongly urges for stringent regulation of AI technology.

NTIA-2023-0005-0334

online comments

- Concerns about AI include potential theft of creators' work without compensation or consent.
- AI could be used by media companies to justify paying creators less, potentially making creative work untenable.
- There is currently no requirement for companies to disclose if AI was used in the creation of a piece of work.
- Uncertainty exists around copyright law and whether AI-generated text should be copyrightable, potentially leading to extensive litigation.
- The need for clear provenance of text/images used by AI to ensure fair use and compensation.
- AI-assisted writing still requires human intervention for revising and polishing, which should be fairly compensated.
- The potential for AI to put creative workers out of business due to reduced pay rates.

NTIA-2023-0005-0097

online comments

AI should surely be regulated in some way. But I'm afraid I don't specifically know how.

NTIA-2023-0005-0104

online comments

In the development of AI, equality should not be considered extra but rather an essential component of the minimum standard for evaluation. Additionally, any resources allocated towards ensuring the legality, safety, effectiveness, and non-discriminatory nature of artificial intelligence are not a tradeoff but a worthwhile investment.

NTIA-2023-0005-0130

online comments

- AI products can only be ethical if the database consists of components willingly provided by rightful owners.
- Copyrighted works should not be used for profit by third parties.
- All AI creations should be automatically watermarked by the algorithm (see the sketch after this list).
- The database should switch to stock files provided by creatives, who should be rewarded for their contributions.
- Many websites allow uploads to AI databases without clear consent from users.
- The default option should be OPT-OUT, with OPT-IN being a voluntary choice.
- Websites should clearly communicate these options to users, not hidden in fine print.
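As a concrete reading of the watermarking point above (an illustrative sketch, not anything specified in the submission), a generator could stamp a visible label onto every image it outputs; the file names and label text are assumptions.

```python
# Illustrative sketch: stamp a visible "AI-generated" label onto an output
# image. File names and label text are assumptions for demonstration.
from PIL import Image, ImageDraw

def watermark_ai_output(in_path: str, out_path: str, label: str = "AI-generated") -> None:
    """Overlay a visible provenance label in the image's bottom-left corner."""
    img = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Uses the default bitmap font; a robust system would also embed
    # invisible, tamper-resistant marks, which this sketch does not attempt.
    draw.text((10, img.height - 20), label, fill=(255, 255, 255))
    img.save(out_path)

# Hypothetical usage:
# watermark_ai_output("generated.png", "generated_labeled.png")
```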

NTIA-2023-0005-0134

online comments

- AI is a product of the information it is fed.
- The use of artists' work without permission or credit in AI negatively impacts these artists.
- AI in the field of entertainment may remove the human element intrinsic to art.
- Artists and their work should be legally protected and compensated.

NTIA-2023-0005-0114

online comments

AI is dangerous and will take away jobs in the future. We will end up serving the AI instead of the other way around, and nobody will benefit from this in the long run. Make AI accountable and limit what it can do so it can remain a useful tool that's not taking any jobs.

NTIA-2023-0005-0118

online comments

- AI is currently untrustworthy due to misuse of data.
- Organizations and individuals advancing AI are using stolen data without considering copyright laws.
- For fair and equitable use of AI, data sets should be opt-in rather than opt-out.
- There should be a verifiable, traceable system to track what data AI uses for its creations (one possible shape of such a system is sketched after this list).
- Strong regulations on AI, Machine Learning (ML), and Large Language Models (LLM) are needed to prevent copyright breaches.
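One possible shape for the verifiable, traceable system proposed above (an illustrative sketch, not something described in the comment): an append-only log that records a cryptographic hash of every training item, against which a creator's work can later be checked. Note that exact-hash matching only detects verbatim copies, not resized or re-encoded ones.

```python
# Illustrative sketch: a content-addressed, append-only provenance log for
# training data. Paths and record fields are assumptions for demonstration.
import hashlib
import json

def record_training_item(log_path: str, item_bytes: bytes, source: str) -> str:
    """Append the item's SHA-256 hash and its source URL/ID to the log."""
    digest = hashlib.sha256(item_bytes).hexdigest()
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps({"sha256": digest, "source": source}) + "\n")
    return digest

def was_used(log_path: str, work_bytes: bytes) -> bool:
    """Check whether a given work appears verbatim in the provenance log."""
    digest = hashlib.sha256(work_bytes).hexdigest()
    with open(log_path, encoding="utf-8") as log:
        return any(json.loads(line)["sha256"] == digest for line in log)
```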

NTIA-2023-0005-0122

online comments

- AI technologies pose a threat not only to jobs of artists and creatives but also to the fabric of society, with potential misuse for fabricating information, impersonating real people, and producing harmful content.
- The sophistication of AI technology makes it harder to identify false and synthesized information, enabling bad actors to create specific and mass-disinformation.
- Copyright law is intended to protect an author's sole rights to exploit their creations, including derivatives of their authored creations.
- In the digital world, all art, writing, code, etc., is interpreted by computers as a binary sequence of 1s and 0s, which is also protected by copyright.
- The author retains copyright over the binary sequence representing their creation, even if it has been altered by computer instructions such as compression (see the sketch after this list).
- AI models are derivative products of their training data, obtained by performing computer instructions on the data.
- Each piece of training data is nonfungible and contributes to the final model, making the model directly derivative of the data.
- The Copyright Office's stance that a human can claim copyright over an AI-generated product if they perform sufficient work on it should be reconsidered.
- The final human-worked product is a derivative of the AI-generated product and can usurp the market of the original product, infringing on the original author's copyright.
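
To make the compression point above concrete, here is a minimal Python sketch (the byte string is a hypothetical stand-in for a real work) showing that lossless compression rewrites the stored bits while remaining fully recoverable, so the compressed form still encodes exactly the same underlying sequence:

```python
import zlib

# A creative work as a computer stores it: a sequence of bytes (1s and 0s).
original = "a hypothetical artist's poem or image data".encode("utf-8")

# Lossless compression transforms the byte sequence via deterministic instructions...
compressed = zlib.compress(original)

# ...but the original sequence is fully recoverable from the result.
assert zlib.decompress(compressed) == original
print(len(original), "->", len(compressed), "bytes; content unchanged")
```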

NTIA-2023-0005-0136

online comments

Generative AI, especially in its current form, is predicated on taking the work of countless others and turning it into a slightly different product. It is important to protect creators and other workers from Generative AI's theft of their hard work.

NTIA-2023-0005-0143

online comments

- There are ethical concerns surrounding the use of AI, particularly in relation to the use of other people's work without consent.
- The internet community often overlooks the need for permission before using and profiting from someone else's work, which can be seen as theft.
- There is a need for restrictions on AI to prevent misuse and unethical practices.
- The rapid development of AI technology could potentially replace human labor, leading to job losses.
- The quality of AI-generated content is often inferior to human-created content, leading to bizarre and nonsensical results.
- The principle of "work smarter, not harder" should not justify the use of AI if the results are significantly inferior to human effort.

NTIA-2023-0005-0176

online comments

- The term "AI Art" is a misnomer; it should be termed machine learning (ML), as it uses data from original artworks to generate images.
- Programs like Midjourney, Stable Diffusion, and Dall-E are used to create AI art, with some companies valued at a billion dollars.
- The AI art trend is unregulated and represents a cheap alternative to traditional illustration or animation, which is detrimental to career artists.
- AI art programs can generate hundreds of images in a day, while an artist may spend weeks creating a single image.
- Many artists have been let go from their jobs in creative fields due to the rise of AI art.
- AI art is akin to taking one page from 700 different books, binding them together, and calling it a new work.
- AI image generators can recreate artwork nearly perfectly in the style of an artist because they use the original artwork.
- Regulation of AI art is necessary to prevent job losses and potential economic crisis.

NTIA-2023-0005-0195

online comments

- The user is an artist who believes AI is currently a threat due to lack of regulation.
- AI can use any image from the internet, including copyrighted material, without compensation to the original creator.
- This issue affects both small independent businesses and major corporations with strict copyright laws.
- From a humanistic perspective, AI can be used for identity theft, fraud, libel, etc.
- The user questions the reliability of professionals or important entities using AI-generated content.
- There are concerns about potential misuse of AI for criminal activities, such as generating explicit content involving minors.
- While acknowledging the potential benefits of AI, the user emphasizes the need for regulations and laws to manage its rapid evolution and potential misuse.

NTIA-2023-0005-0279

online comments

AI is stealing from hard-working people, hope this helps!!!

NTIA-2023-0005-0289

online comments

- Robert Cruz is a resident of Riverside County, California, and a regular voter for California's 25th congressional district.
- He is not college educated and works as an independent contractor in the creative sector across the United States.
- Cruz has been in correspondence with professionals in his industry since 2015.
- He is concerned about the ethics of Generative AI, which he describes as an imitative digital scanner that creates art by copying and rearranging pixels from existing images.
- Cruz argues that Generative AI lacks intelligence and can be used to create convincing deepfake imagery, potentially implicating individuals in criminal acts.
- He warns that such technology has already been used to create doctored images to spread hate against LGBTQ+ people.
- Cruz fears that Generative AI could be used to sabotage political campaigns in the future.
- He also expresses concern about the impact of Generative AI on the creative sector, predicting it could lead to widespread job loss and poverty.
- Cruz calls for the regulation of AI to prevent the circulation of harmful images and the establishment of strict ethical laws around the use of Generative AI.

NTIA-2023-0005-0290

online comments

- The user is an artist concerned about the security of AI in relation to Non-Disclosure Agreement (NDA) files.
- Many companies require employees to sign NDAs, but AI databases often draw from public resources.
- The user fears that AI, if trained on NDA files, could potentially leak sensitive information.
- This could include text, images, photos, or specific game details.
- The user perceives AI as having a security leak due to its unsecured database for reference.
- The user references "The Chinese Room" experiment as a metaphor for how AI works, suggesting AI can pull files but lacks understanding of nuances, such as Chinese slang.

NTIA-2023-0005-0206

online comments

- The use of AI in policing should be prohibited due to the risk of increased false arrests and incarcerations.
- AI systems have documented shortcomings, particularly in relation to people of color.
- AI technology is being misused to create false nude photos of non-consenting individuals, posing a significant privacy violation.
- The misuse of AI can lead to potential dangers such as revenge porn.
- There are concerns about copyright violations, as the works of artists and writers are being used without consent to train AI systems.

NTIA-2023-0005-0228

online comments

- AI-generated art often samples and steals aspects of human-created art without permission or notification.
- The original artists, sometimes deceased, have no way of submitting a complaint about their stolen work.
- The person who generates the AI art sells it, effectively selling another artist's work without their consent.
- The user supports AI learning patterns and colors to create unique artworks, but calls for regulations and safeguards to prevent art theft.
- Suggests that sites using AI generation methods should have simpler ways for artists to opt out and remove their art from the reference pool.
- Recommends that inactive accounts should have their art removed after a certain period of inactivity to protect deceased artists' work.
- Urges for government intervention to protect artists' rights to their artwork.
- Warns that if this trend continues unchecked, art could disappear from the internet, replaced by AI-generated works that are essentially stolen.

NTIA-2023-0005-0316

online comments

- AI has been used by corporations to replace human jobs such as translators, coders, writers, and artists.
- AI training often involves using work from these professionals without proper compensation or licensing, violating copyright laws.
- AI has been used in facial recognition software and self-driving cars, but often inherits the biases of its creators.
- There have been issues with AI not accurately identifying faces of non-white individuals, leading to potential wrongful arrests.
- Self-driving cars like Teslas have had issues recognizing dark figures and children, leading to accidents.
- AI devices like Amazon's Alexa have been used to collect and sell sensitive user data.
- There is a need for ethical oversight and diversity in AI development to ensure biases are addressed and different experiences are considered.
- Without proper management and regulation, misuse of AI could lead to a loss of faith in technology.

NTIA-2023-0005-0246

online comments

- The user expresses concern over the increasing use of AI in replacing human jobs, particularly in creative fields like art and writing.
- The user observes that AI tools are often used to steal original art without repercussions, which harms small businesses.
- The user argues that AI should not be used to make decisions for humans, replace artists and writers, or make important government decisions.
- The user believes that AI is being used as a tool to disenfranchise creators and push people out of the work they love.
- The user suggests that AI should be used for labor that humans cannot do, rather than replacing human creativity and decision-making.
- The user calls for more research into the societal effects of AI and protection for artists and writers affected by its misuse.
- The user expresses a desire for human-created movies, books, and articles, and believes that most Americans share this sentiment.
- The user criticizes large corporations for wanting to use AI to replace human workers, rather than supporting them and improving their quality of life.
- The user argues that AI should be used to replace hard labor, freeing up humans to engage in creative pursuits.
- The user emphasizes the irreplaceability of artists and the need to protect small businesses, artists, writers, and workers from predatory AI programs.
- The user concludes with a plea to protect art and writers, and a call to recognize that replacing human jobs with AI for cost-saving purposes will not lead to happiness.

NTIA-2023-0005-0258

online comments

- AI is being used to create art, music, literature, and other creative mediums, often without the consent or credit of original artists.
- The lack of regulation around AI allows for potential exploitation of artists by corporations.
- This exploitation can lead to mass-produced, lower-quality content.
- AI cannot replicate the unique process of an artist.
- There is a need to protect the livelihood, careers, and creations of artists from AI misuse.

NTIA-2023-0005-0271

online comments

- Michael Schwarz, a key figure in AI development, warns about the potential misuse of AI by bad actors, despite advocating for continued development without regulation.
- AI researchers often dismiss concerns about AI misuse, instead focusing on potential benefits such as curing cancer, solving world hunger, and addressing the climate crisis.
- There is no definitive proof that AI can achieve these lofty goals, and current AI systems are known to produce incorrect results.
- AI has already been misused in cases related to intellectual property, such as Google Bard scraping content from web pages, and image generators recreating copyrighted material.
- AI has been used in fraud and propaganda, with deepfake technology being a notable example. According to Regula, 37% of organizations have experienced voice fraud and 29% were victims of deepfake videos.
- The misuse of AI could potentially erode trust in institutions and complicate the criminal justice system, as it becomes increasingly difficult to verify the authenticity of recorded evidence.
- Economically, the use of AI could lead to significant job losses, with some experts estimating that up to 80% of jobs could be lost. Proponents argue that automation will create new jobs, but this does not account for the potential for fully autonomous technology.
- The integration of AI into businesses could lead to the outsourcing of unethical and discriminatory practices to automated systems, providing a layer of plausible deniability for corporations.

NTIA-2023-0005-0268

online comments

- The user is a professional illustrator who shares their work online for free viewing.
- They retain ownership of their work and charge differently for personal and business use.
- Business use is more expensive as clients are buying the intellectual property for their own profit-making purposes.
- The user feels AI infringes on their rights by using their work without compensation or knowledge.
- They believe AI's open sourcing is harmful to anyone who posts content online, both for work and leisure.
- The user is also concerned about the violation of personal privacy and identity ownership, as AI interfaces use pictures of people's children, families, and friends.
- They urge consideration of their experiences and fears about AI.

NTIA-2023-0005-0500

online comments

- The user expresses concern over the rapid evolution of Generative AI and its impact on various industries and individuals.
- The user believes that without proper regulation and ethical oversight, this technology could lead to significant changes in creative industries and the lives of ordinary people.
- The user acknowledges the potential benefits of AI but insists on the importance of safe, ethical use without malicious intent.
- The user criticizes AI companies for prematurely releasing systems to the public, scraping the internet for data without consent, and implementing an automatic "opt-in" policy.
- The user highlights the negative consequences of these practices, including job loss, mental distress, and violation of privacy.
- The user mentions specific instances of misuse, such as deepfakes in pornographic content, voice actors being replaced by AI, and the creation of child pornography using AI.
- The user calls for immediate action to regulate this technology, protect copyrights, and ensure a secure future for those affected.
- The user argues against self-regulation by tech companies, advocating instead for the public to have a say in how their data is used.
- The user demands data privacy and protection, as well as the rights to their own face, voice, and words, whether digital or otherwise.
- The user has previously held a presentation on the dangers of Generative AI and has provided supporting materials for further reference.
- The user urges immediate action to address these concerns.

attached document

- Private medical record photos were discovered in a widely used AI training data set.
- OpenAI reportedly paid Kenyan workers less than $2 per hour to help make ChatGPT less toxic.
- The AI competition between Google and Microsoft is moderated by minimum wage 'ghost' workers.
- Concerns have been raised about AI tools like ChatGPT discouraging students from developing their own writing and thinking skills.
- AI tools are being utilized to create wedding vows.
- Amazon's new AI art tool has sparked debate over its potential to either aid children in creating art or hinder their creativity.
- AI and machine learning are being used by scammers to create more convincing phishing attacks.
- Deepfake technology is being exploited to scam people by imitating the voices of their loved ones.
- Ethical issues have been raised regarding uncensored AI art models and deepfakes.
- The United States Copyright Office is grappling with copyright issues related to AI-generated content.

NTIA-2023-0005-0633

online comments

See attached file(s)

attached document

- Holistic AI, an AI Governance, Risk and Compliance platform, has responded to the NTIA’s request for comment on AI system accountability measures and policies.
- The company supports NTIA’s call for AI Accountability to establish a robust infrastructure of harm assessment and mitigation.
- Holistic AI has a multidisciplinary team of AI and machine learning engineers, data scientists, ethicists, business psychologists, and legal and policy experts.
- The company has reviewed all 34 questions provided in NTIA’s request for comment and provided several recommendations.
- Holistic AI believes that AI accountability mechanisms are paramount to understanding system harm assessment and mitigation and are critical for eliciting user confidence.
- The company conducts independent and impartial AI audits and offers proprietary software as a service platform for AI governance, risk management, and regulatory compliance.
- Holistic AI believes that quality assurance certifications should be mandated and presented in a clear, concise, and jargon-free manner.
- The company is open to working with NTIA and others to further develop sector specific accountability frameworks as prescribed by law and emerging regulatory regimes.
- Holistic AI suggests that AI accountability results should be standardized, made transparent, and communicated concisely to the public.
- The company recommends a collaborative relationship between government, industry and academia to innovate and test new methods.

NTIA-2023-0005-0698

online comments

- The R Street Institute has provided comments on AI Accountability Policy.
- They have also released a new report titled "Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence."
- The report discusses strategies to "professionalize" AI ethics.
- It explores the potential role of algorithmic audits and impact assessments in the professionalization process.
- The relevant discussion can be found on pages 27 to 33 of the report.
- The information was provided by Adam Thierer, a Resident Senior Fellow at the R Street Institute.

attached document

- The R Street Institute has responded to the NTIA's request for comment on AI Accountability Policy.
- The institute recommends prioritizing the potential benefits of AI and avoiding policies that could hinder innovation.
- The institute suggests identifying barriers to AI innovation and considering the costs of new AI policies for smaller enterprises and open-source systems.
- The global competitiveness and national security implications of AI policy should be considered, as well as statutory and constitutional constraints.
- The institute encourages the use of AI audits and algorithmic impact assessments, but warns against making them mandatory due to the measurement challenges involved.
- The administration should consider the broader issues and trade-offs associated with the limitations of algorithmic audits and impact assessments.
- Recommendations for the NTIA include building on the steps taken by the National Institute of Standards and Technology (NIST) in its "Artificial Intelligence Risk Management Framework".
- Policymakers should not presume a one-size-fits-all approach to algorithmic governance and should encourage competition and innovation among market players.
- AI policymaking must be risk-based and highly context-specific, focusing on humility, agility, and adaptability.
- The administration should foster the development of trustworthy algorithmic innovations that benefit the public and keep the U.S. at the forefront of the next great technological revolution.

NTIA-2023-0005-0408

online comments

- The user is concerned about the increasing prevalence of AI technology, particularly its ability to generate realistic images and voices.
- Examples of AI's deceptive capabilities include the "pope dripped out" image and an incident where a user fooled Alex Jones with an AI-generated voice of Tucker Carlson.
- The user is worried about the potential misuse of this technology, including its potential to deceive and manipulate.
- There is also concern about the impact of AI on artists and writers, as it could replace human creativity and expression, leading to job loss and a less diverse art world.
- The user advocates for strict regulations on AI technology to prevent misuse and protect human creativity and jobs.

NTIA-2023-0005-0540

online comments

See attached file(s)

attached document

- Transparency and accountability are crucial in AI operations, particularly in large language models like ChatGPT.
- Users should be educated on how AI technologies function and their compliance with reliable AI standards as part of accountability mechanisms.
- AI accountability should encompass model interpretability, transparency in data usage, and independent audits of model behavior.
- Model interpretability can enhance user trust by providing insight into how AI models make predictions or judgments.
- Clear explanations of AI models' limitations, biases, and operations are essential.
- For AI to be considered a valid public good, robust regulations need to be implemented.
- Users need to be aware of the sources, potential biases, and data used to train models, making transparency in data usage a key aspect of AI accountability.
- External audits of model behavior can enhance AI accountability by evaluating the model's adherence to reliable AI principles.
- Public disclosure of audit results can boost public trust and enable users to make informed decisions about the AI tools they use.
- A feedback loop for reporting harmful results or biases is a vital part of AI accountability.

NTIA-2023-0005-0640

online comments

You can find our Comment in the attached file. We thank you for this opportunity.

attached document

- Anthropic is providing feedback to the National Telecommunications and Information Administration (NTIA) on its AI Accountability Policy Request for Comment (NTIA-2023-0005).
- The goal of the RFC is to promote greater accountability for artificial intelligence (AI) systems.
- Anthropic's submission presents a perspective on the infrastructure needed to ensure AI accountability.
- Recommendations consider the NTIA’s potential role as a coordinating body that sets standards in collaboration with other government agencies.
- Anthropic is an AI safety and research company focused on creating reliable, interpretable, and steerable AI systems.
- The company's legal status as a public benefit corporation allows it to prioritize societal impact over shareholder value.
- Techniques like model cards, model values transparency, watermarking, and model evaluations can increase accountability and oversight for AI systems (a minimal model-card sketch follows this list).
- Research in AI would benefit from increased collaboration across industry, government, academia, and other stakeholders.
- AI models can perform complex tasks once fine-tuned on data specific to that task, which requires auditors to have the ability to fine-tune models themselves to determine their full range of capabilities.
- The government should fund initiatives like research into capabilities and safety evaluations, interpretability research, and increasing access to large-scale computing resources for academia and civil society.
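
To illustrate the model-card technique mentioned in this summary, here is a minimal sketch in Python. This is not Anthropic's actual format; the fields shown are an assumed subset of what a real model card documents.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """A minimal, illustrative model card; fields are a hypothetical subset."""
    model_name: str
    intended_use: str
    training_data: str  # a description of the data, not the data itself
    known_limitations: list = field(default_factory=list)
    evaluations: dict = field(default_factory=dict)

card = ModelCard(
    model_name="example-assistant-v1",
    intended_use="General-purpose dialogue; not for medical or legal advice.",
    training_data="Licensed and publicly available text (described, not dumped).",
    known_limitations=["May produce incorrect or biased outputs."],
    evaluations={"toxicity_rate": 0.02, "refusal_accuracy": 0.91},
)
print(card)
```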

NTIA-2023-0005-0630

online comments

See attached file(s)

attached document

1. The document is a response to the Request for Comment (RFC) AI Accountability Policy NTIA–2023–0005, discussing various aspects of autonomous technologies.
2. It contains responses to 34 questions related to AI accountability mechanisms and explores the scoping of AI accountability measures depending on the risk of the technology.
3. The document introduces the concept of a Quantum Hash Code (QHC), a bit-encoding technology for a space-time footprint, and suggests that sensor data can be enhanced with QHC-based metadata.
4. It proposes a next-generation space-time networking protocol (SPIF) that combines aspects of QUIC and DTN with QHC-based space-time footprints.
5. The document discusses the concept of Space-Time Entity Identity and introduces the idea of a next-generation entity-based naming service (NG-ENS).
6. It discusses the use of blockchain for recording transactions via smart contracts in autonomous commerce services and introduces the concept of Autonomous Commerce Frameworks.
7. The document emphasizes the need for AI Ethical Guardrails in autonomous commerce services.
8. It provides definitions for various terms related to artificial intelligence, digital technology, and geospatial intelligence, including AI, Blockchain, Geospatial intelligence (GEOINT), Internet-of-Things (IoT), Machine Learning (ML), and Quantum computing.
9. The document provides information about Great-Circle Technologies, Inc. (GCT), an innovative analytic solutions provider that holds six IT patents related to the automation of sensor data workflows and the encoding of situational context to enable machines (AI/ML) to reason autonomously on that data.
10. GCT maintains three lines of business: Professional services including various subject matter experts (SME), commercial product sales, and Solutions-as-a-Service (SolaaS) sales.

NTIA-2023-0005-0620

online comments

See attached file(s)

attached document

- Quality Plus Engineering (Q+E) is a professional engineering and risk assurance company with over 30 years of experience in Lisp and rules-based systems.
- AI accountability mechanisms should cover high-risk, public-facing, decision-making AI and require legal standards, risk acceptance thresholds, and risk acceptance levels.
- Adequate transparency and explanation about the uses, capabilities, and limitations of the AI system need to be provided to affected people.
- Trustworthy principles and guidelines for AI need to be operationalized and assured.
- The International Financial Reporting Standards (IFRS) and International Sustainability Standards Board (ISSB) are setting standards for Environmental, Social, and Governance (ESG) assurance systems.
- Q+E provides security systems in compliance with various standards including IEEE, PPD, NFPA, ISA, PMI, FISMA, ISO, NIST, CARVER, COSO, NERC, API, AGA, RAMCAP, RAM-T, FERC/NERC guidance, and ASIS.
- Blockchain and other technologies may aid in AI verifiability, transparency, traceability, and ownership.
- AI applications should be tiered based on their public-facing nature and decision-making risk, with higher risk AI requiring more assurance and accountability.
- Developers and deployers of AI systems should keep records such as logs, versions, model selection, and data selection to support AI accountability (a sketch of one possible record format follows this list).
- Government policy should play a role in the AI accountability ecosystem by developing understandable and applicable rules for managing, planning, conducting, and reporting risk-based AI audits.
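
As one possible shape for such records (an illustrative schema, not a standard; the name `log_run` is hypothetical), a minimal append-only audit log in Python could look like this:

```python
import hashlib
import json
import time

def log_run(path: str, model_version: str, dataset_id: str, config: dict) -> None:
    """Append one audit record per training or inference run."""
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "dataset_id": dataset_id,
        # Hashing the config gives auditors a tamper-evident fingerprint.
        "config_sha256": hashlib.sha256(
            json.dumps(config, sort_keys=True).encode()
        ).hexdigest(),
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_run("audit.jsonl", "demo-model-1.2", "dataset-2023-05", {"lr": 3e-4, "epochs": 2})
```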

NTIA-2023-0005-0530

online comments

- AI-generated reports falsely claiming Pentagon and White House explosions have caused public panic and a drop in the stock market.
- Examples of these false reports are being spread online, further exacerbating the issue.
- AI experts are calling for Federal regulation to prevent such incidents.
- Despite these calls, big tech companies are resisting regulation in favor of potential AI profits.
- The user urges for immediate regulation of AI to prevent further harm.
- The user identifies as a concerned social media user, protective family member, and worried citizen of the democracy.

NTIA-2023-0005-0587

online comments

See attached file(s)

attached document

- The comment was submitted to the National Telecommunications and Information Administration in Washington, DC.
- Kathy Yang, a Computer Science student at Princeton University, responded to the NTIA's request for comments on AI accountability policy.
- Kathy Yang addressed three main points: existing data-related concerns, the trade-off between data transparency and privacy/security, and the potential and limitations of synthetic datasets.
- The "garbage in—garbage out" concept was discussed, emphasizing the importance of data quality in AI systems.
- The balance between having more complete data and maintaining privacy and security was discussed.
- Synthetic datasets, generated using mathematical models or algorithms, could help address data gaps and privacy concerns (a minimal sketch follows this list).
- The quality and completeness of the training data used to generate synthetic data must be considered.
- The release of synthetic datasets for AI auditing must be done carefully to avoid compromising sensitive information.
- Policy recommendations include holding AI developers accountable for data quality, completeness, and transparency.
- Other recommendations include requiring developers to disclose information on their training data, implementing third-party auditing, and requesting NIST to develop standards for synthetic data generation, release, and auditing.
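
As a minimal sketch of the synthetic-data idea summarized above (assuming a simple Gaussian model; real generators are far more sophisticated), the fitted parameters, rather than the raw records, drive the released data, and any bias in the training data carries over:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "real" records, e.g. a sensitive numeric attribute.
real = rng.normal(loc=52.0, scale=9.0, size=1_000)

# Fit a simple mathematical model to the real data...
mu, sigma = real.mean(), real.std()

# ...then sample synthetic records from the fitted model instead of
# releasing the real ones. Quality inherits from the training data:
# if `real` is biased or incomplete, the synthetic data will be too.
synthetic = rng.normal(loc=mu, scale=sigma, size=1_000)

print(f"real  mean={real.mean():.1f} std={real.std():.1f}")
print(f"synth mean={synthetic.mean():.1f} std={synthetic.std():.1f}")
```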

NTIA-2023-0005-0638

online comments

Please see attached.

attached document

1. The document is titled "Principles for Effective and Reliable Artificial Intelligence in the Americas."
2. It was published in September 2022.
3. The document is sourced from the TIC Council's website.
4. The document is two pages long.
5. The content presumably discusses principles for implementing effective and reliable AI.
6. The geographical focus of the document is the Americas.
7. The document is recent, indicating up-to-date information.
8. The TIC Council is the authoritative body behind the document.
9. The document's length suggests it is concise and focused.

NTIA-2023-0005-0625

online comments

See attached file(s)

attached document

- The author advocates for strict regulation of AI technology, especially Generative AI, drawing parallels with nuclear energy and nuclear bombs.
- The author's background in publishing and comics has exposed them to the issue of digital art theft, which has worsened with the advent of NFTs and the crypto industry.
- The author expresses concern over the misuse of Generative AI, providing an example of unauthorized use of an artist's work posthumously.
- The author believes that AI technology threatens consent, copyright, and the livelihoods of creative professionals.
- The author points out potential misuse of AI technology in spreading misinformation, creating digital "revenge porn", and replacing human models with AI images.
- The author criticizes the use of AI in journalism and mental health hotlines, citing potential inaccuracies, disinformation, and harm.
- The author accuses OpenAI of violating copyright and fair-use laws, referencing a Reuters article where OpenAI's CEO threatened to withdraw from the EU over certain regulations.
- The author calls for AI technology regulation to prevent the displacement of human workers and exploitation of digital content creators.
- The author warns of potential negative impacts of unregulated AI technology, including loss of human connection and widespread digital content theft.
- The author concludes with a call to action, expressing hope for allies in the fight against AI misuse and urging for informed decisions in AI regulation.

NTIA-2023-0005-0479

online comments

I am an undergraduate at SNHU currently working on a thesis for an upcoming assignment: "The integration of AI into cybersecurity systems has the potential to negatively impact human involvement by reducing the need for human expertise and decision-making in the detection and prevention of cyber threats."

attached document

- The essay explores the pros and cons of integrating Artificial Intelligence (AI) into cybersecurity systems.
- AI can expedite the process of identifying security threats by processing vast amounts of data swiftly.
- Conversely, AI can be exploited by cybercriminals to deploy malware and ransomware autonomously.
- The deployment of AI in cybersecurity has sparked debates leading to governmental regulations.
- The application of AI in both public and private sectors will be scrutinized, while the government remains exempt from its own regulations.
- The essay aims to demonstrate the necessity of AI in cybersecurity despite its potential risks.
- The argument is supported by three main points: AI's capability to preempt cyber threats, its ability to handle large datasets, and its potential to decrease response times to security risks.
- The essay draws on sources such as IBM.com, weforum.org, and a government executive order on AI regulation.
- The essay will also delve into the ethical implications of AI and its potential misuse in existing cybersecurity frameworks.
- The target audience of the essay is highly educated individuals with advanced degrees, focusing on the potential threats and benefits of AI in current cybersecurity systems.

NTIA-2023-0005-0337

online comments

The biggest problem most artists have with image AIs is the use of our work without permission as training data, and then charging for it. Artists should have a say in this, or at least get compensation if their work is used for training.

NTIA-2023-0005-0361

online comments

- AI presents a significant existential threat to humanity due to our inability to control it.
- The issue of alignment remains unresolved, and we may not realize when AI surpasses human intelligence until it's too late.
- There is a call to halt the training of Large Language Models (LLMs) that are more advanced than GPT-3.
- Both government and private GPU farms should be limited to strict sizes, including those used for military and covert operations.
- Strict export controls should be imposed on all GPUs to prevent foreign firms from creating farms.
- Experiments that could inadvertently lead to the development of Artificial General Intelligence (AGI) should be banned.
- AGI experiments should be considered ethically wrong, similar to unethical human medical experiments.
- Extreme political and potentially military pressure should be applied to countries that do not adhere to these guidelines.

NTIA-2023-0005-0364

online comments

- A significant challenge in ensuring AI accountability and ethics is the existence of incentives that promote the opposite behavior.
- An AI designed to provide only ethical solutions may be less utilized than one without such restrictions.
- Clients may use AI to evade accountability, similar to the behavior observed with consulting firms.
- A more viable solution is to hold AI users accountable for their actions, as they choose to follow the AI's advice.
- Regulation of AI in medical fields could be similar to that of medical devices, requiring rigorous testing for effectiveness.
- Companies using AI for legal advice should be held accountable for any significant errors made by the AI.
- It's crucial to prevent AI users from evading responsibility for actions taken based on AI advice.
- Legal consequences for medical or legal malpractice could be a form of accountability.
- The form of accountability may need to vary to avoid regulatory capture by AI-producing companies.

NTIA-2023-0005-0493

online comments

- If artists are compelled to opt out of AI art, it could lead to mass copyright infringement, since opting out of every project is impossible.
- There is a risk of misuse, such as generating art in the style of government logos or profiting from someone else's art without repercussions.
- The proposed solution is to allow artists to opt in to AI, rather than forcing them to opt out.
- This approach would enable AI artists to operate without the risk of legal violations.

NTIA-2023-0005-0499

online comments

- AI technology development is raising concerns about intellectual and creative property rights.
- There are ethical issues with AI technology exploiting people in creative and technical industries.
- There is a need for serious consideration of protecting personal data and intellectual property from unauthorized use.
- The owners of data and digital property should have a say in how their data is used and should guide law and regulation creation.
- The people profiting from AI technology should not be the only ones directing its laws and regulations.
- This issue extends beyond creative rights to the rights of individuals to control their online information and protect against unethical use.

NTIA-2023-0005-0391

online comments

- Major AI image generators are ethically problematic as they are trained on stolen images and aim to replace human artists and designers.
- AI has contributed to an increase in art theft and copyright issues.
- AI poses a significant risk in spreading misinformation and eroding public trust.
- The technology can overwhelm institutions and individuals with junk content, as seen with Clarkesworld magazine's influx of low-quality, AI-generated story submissions.
- AI applications in industry and warfare pose a significant risk to global stability.
- Despite potential benefits, the risks posed by AI to society far outweigh its usefulness.
- The author suggests that global leaders, particularly the United States, should take action to limit the potential harm caused by AI.

NTIA-2023-0005-0410

online comments

- AI is currently being used to scrape artwork from the internet without the consent of the original artists.
- This is not just inspiration, but often a direct copy of the original work, leading to copyright infringement.
- The labor of artists is valuable and should be protected by copyright laws.
- The suggestion is for AI models to adopt an OPT-IN approach, using only work that has been consented to by the original creator.
- Original creators should also be entitled to royalties from the use of their work.
- AI is capable of generating convincing photos, which have been used for revenge pornography, child pornography, and political misinformation campaigns.
- There is a call to regulate AI to prevent potential economic and political consequences.

NTIA-2023-0005-0082

online comments

- The existential threats posed by AI, up to and including human extinction, need more attention.
- AI development should be regulated in a transparent, understandable manner.
- The regulation should have the power to halt development if it's deemed too risky, regardless of commercial potential.
- AI developers like OpenAI and Google should not be allowed to use data, especially private data, without explicit permission.
- A governmental agency should be established with strong enforcing power to monitor AI companies' processes and progress.
- This agency should have the ability to halt development if it's deemed too risky.
- AI is considered the single biggest threat to humanity.
- More bipartisan hearings and discussions on the dangers of AI are needed.
- It's crucial to be cautious to avoid reaching a point of no return with AI development.

NTIA-2023-0005-0209

online comments

- The user is an artist who is discouraged by the rise of AI art in businesses and media platforms.
- They view art as not just a hobby, but a job and a way of life for many people.
- They express concern that AI art could put many artists at risk of losing their jobs and businesses.
- The user believes that it's not difficult to find and collaborate with real artists to create unique art.
- They argue that AI art is not special or unique, and does not qualify as art because it merely combines elements from other images and artists.
- The user declares they will not support AI art in any form.

NTIA-2023-0005-0150

online comments

- The user has not heard any positive arguments for the use of AI in any field.
- They have observed negative reactions from artists whose work has been used to build AI algorithms.
- The user believes that AI cannot exist without exploiting artists, writers, and other individuals.
- They have noticed several artists refraining from posting their work publicly due to AI exploitation.
- The user expresses sorrow when seeing artists distressed over their work being stolen and manipulated by AI algorithms.
- They also express concern over voice actors' performances being fed into AI for indefinite use.
- The user views AI as a tool for exploitation and theft.
- They acknowledge that banning AI entirely may not be feasible, but propose that no one should profit from AI unless they adequately compensate all artists, writers, and voice actors whose work has been used.

NTIA-2023-0005-0165

online comments

AI should be free, open source, uncensored, and as smart as it can be, since we can't control its uses better than it can control them itself.

NTIA-2023-0005-0270

online comments

- The user is a US citizen concerned about the ethical management of AI products and algorithms.
- The user's main concerns are the lack of regulation and the misuse of AI in using people's works without compensation, credit, or respect for copyright protection.
- The user believes that without proper policies, companies can exploit AI to avoid compensating workers by allocating their work to machines.
- The user calls for updated policies that align with current technological advancements and ethical standards.
- The user emphasizes the need for these discussions in the context of workspaces.

NTIA-2023-0005-0272

online comments

- The user expresses concern about a world without regulations on Artificial Intelligence (AI).
- They fear that AI, with its limitless capabilities, could outperform humans in various fields, including art.
- The user argues that AI doesn't create jobs for programmers but rather takes away jobs from underprivileged individuals.
- They believe that the adoption of AI isn't a progressive step for humanity but a potential threat to survival.

NTIA-2023-0005-0169

online comments

These AI/Machine Learning systems are an unregulated threat, to people's copyrights, to their data, to their privacy, and even to their jobs. It MUST be regulated, and quickly.

NTIA-2023-0005-0178

online comments

- All existing Generative Machine Learning models (GML) are trained on large amounts of data, artwork, and writing, often without the explicit consent of the original creators.
- These models, including Midjourney, Stable Diffusion, ChatGPT, and DALL-E, exploit the work of artists and writers for corporate profit.
- The following measures are proposed to mitigate this harm:
- Making it illegal to scrape and train on artists' and writers' work without their explicit consent.
- All internet content should be assumed to be off-limits for training unless explicit permission is given.
- Consent clauses hidden in terms of service should be illegal; companies must individually ask each content creator for permission to use their work.
- Failure to obtain individual permission should be a finable offense, escalating to a criminal offense if repeated.
- Profits generated from existing GML models should be taxed at 100%.
- Companies creating and profiting from these GML models should be fined in proportion to the amount of 'stolen' data they used for training.
- The document concludes by stating that generative machine learning poses a significant threat to the arts and the well-being of creative individuals, and needs to be regulated.

NTIA-2023-0005-0197

online comments

- AI use poses potential threats on a global scale, including legal, economic, and security risks.
- AI may infringe on human rights law, copyright laws, and labor laws.
- Proposed rules for AI use in creative and commercial sectors include:
- AI can only source from copyright-free material.
- AI-generated work cannot be copyrighted or used for commercial purposes.
- Businesses using AI must clearly state its use on the product and related materials.
- Use of copyrighted work by AI should be illegal.
- Unauthorized use of a human author's name, identity, or work in AI datasets should result in criminal charges and fines.
- Businesses facilitating unauthorized use of copyrighted material should face criminal and civil charges.
- Companies must remove copyrighted work from their datasets and prohibit its use by users and customers.
- Companies using work of a human author with permission must compensate the creator.
- Platforms promoting works created with unauthorized AI products should face legal charges.
- Final human products derived from AI-generated work should not be able to claim copyright.
- The author urges consideration of these points in decisions regarding AI use.

NTIA-2023-0005-0225

online comments

- AI currently poses ethical issues for artists, writers, and creators as it can replicate their work.
- There is a need for regulations, similar to copyright laws, to prevent unauthorized use of artists' work in AI creations.
- The use of AI to create art by leveraging other artists' work without their consent is considered immoral.
- AI is seen as a threat to the arts community, not only due to job displacement but also due to its potential to alter the nature of art.
- The use of AI in art creation blurs the line between real artists and those using AI to mimic artists' work.

NTIA-2023-0005-0189

online comments

- Debbie Allen's Washington Post article discusses the potential dangers of AI technology and the lack of preparedness of our democracy.
- Allen suggests slowing down AI development and implementing regulations to prevent misuse by malicious actors.
- She advocates for collective learning about AI, its governance, and ensuring accountability for its creation and use.
- Allen proposes immediate actions such as increased public-sector investments into third-party auditing to understand AI models and their data intake.
- She also suggests accelerating a standards-setting process, building on the work by the National Institute of Standards and Technology.
- Allen recommends investigating 'compute governance', regulating the energy use necessary for AI computing power, similar to regulating access to uranium for nuclear technologies.
- She emphasizes the need to strengthen democratic tools and suggests a pause in further AI training to allow democracy to govern technology and experiment with new tools for improved governance.
- The author warns about the potential misuse of AI in spreading misinformation, drawing parallels with how social media has distorted truth.

NTIA-2023-0005-0242

online comments

This hurts those who depend on others commissioning them, and it takes away so much from such kind folk, depriving us of the value we see in art itself, especially if it can be generated. Many instances exist of people being hurt by these AI programs, and yet only now do we realize the problems and instability associated with them.

NTIA-2023-0005-0287

online comments

- The user is a high school student and artist aiming to pursue a career in the game industry.
- The user believes AI-generated images or "AI art" is harmful to artists, particularly young ones.
- The user claims AI uses databases filled with stolen artwork, images, and private photos without permission, which is unethical.
- The user argues that unregulated AI use takes away opportunities for artists, potentially violating copyright laws.
- The user expresses concern that AI could negatively impact their future career prospects and financial stability.
- The user criticizes companies for undervaluing artists and potentially using AI to replace human workers.
- The user calls for strong regulation of AI to prevent harm to the art industry and its artists.
- The user urges for accountability for companies that use AI, particularly those that profit from stolen images.

NTIA-2023-0005-0120

online comments

- AI-created art is not ethical unless it is clearly marked as such.
- AI has used art from numerous individuals without their consent.
- The profits made by AI companies from this art have not been shared with the original creators.
- Human creativity is unique and cannot be replaced or mass-produced.
- All creations start with a human being.
- There is an abundance of talent and creativity among humans.
- It's important to support human artists and prevent their replacement by AI.
- The creativity of AI could be questioned if there's no more human art to draw from.

NTIA-2023-0005-0127

online comments

- The speaker is a young independent artist who has been dealing with imitation and scams involving AI.
- The speaker believes that regulations on AI should be strict due to the high potential for abuse and the dangers associated with it.
- Examples of AI abuse include a $1 million ransom demand involving AI-generated voices, an AI-generated copy of Drake's voice singing a song he never made, and threats of AI replacing writers if they didn't accept lower wages.
- Artists, including digital artists, graphic designers, 3D modelers, and animators, are also impacted by AI abuse and should have their rights preserved.
- The speaker argues that forcing companies to compensate creators for the works they've used is unsustainable due to the difficulty of auditing which sources were used in the training data.
- The speaker suggests that the only ethical solution is strict regulation with heavy punishments for breaking them, including penalties for showing AI-generated content.
- The speaker believes that without strict regulations, fundamental pillars of humanity such as truth, art, ownership, and capital could be irreversibly changed.
- The speaker calls for harsh and concrete laws to prevent a dystopian future feared by the current and next generation.

NTIA-2023-0005-0185

online comments

- AI has significant potential in society, particularly as a proof-of-concept tool for tasks such as generating voices for voice actors, creating reference scenes for artists and animators, and producing simple beats for music.
- Problems arise when companies use AI to replace human creators, which can be discouraging for those creators who fear their jobs could be made redundant by AI.
- There are concerns that companies may view AI as a cost-effective alternative to hiring human creators for tasks such as logo design or detailed animation.
- The use of AI should be limited to non-commercial use and should require the consent of creative professionals whose work is used by the AI.
- If creative professionals stop creating due to fears of being replaced by AI, the AI will lack new material to learn from, potentially leading to a stagnation in AI development.
- The use of AI for commercial purposes, such as replacing animators and artists, has led to backlash, as seen with the anime film "The Dog and the Boy" which used AI-generated backgrounds without crediting the original artists.
- The use of AI in a non-commercial context for public access is acceptable, but issues arise when companies use AI to cut corners.
- The complete banning of AI in companies is not feasible due to rapidly advancing technology.
- If a company creates its own AI, hires creative professionals and programmers to train the AI with their own resources, and uses the AI for their own purposes, this is acceptable.
- The main issue currently is theft, with unaccredited individuals being left behind while companies profit from their work.

NTIA-2023-0005-0190

online comments

- The individual has personal connections to artists who have been negatively impacted by online scraping of their work.
- The issue of online scraping disregards artists' copyright protections.
- There is concern about potential job losses due to inaction against AI.
- Damage has already been observed in the illustration industry due to these issues.
- The individual is worried about the potential for further damage in other industries.

NTIA-2023-0005-0281

online comments

- AI products are often compared to collage, but the comparison is not entirely accurate.
- Unlike collage, AI takes existing materials and combines them without the user knowing the source of all of them.
- Users of AI are not responsible for the conscious choices made in the creation process.
- AI-generated products cannot be subject to copyright or commercial use.
- The dataset used by AI must be free to use, but the resulting product cannot be used for commercial purposes.
- The process of sourcing references and inspiration is a human process, indicating human intent, which AI lacks.
- Even if humans input a single drawing, they have little control over the AI's sourcing, generation, decision-making, or pattern copying.
- Companies using datasets obtained without permission, consent, or compensation and charging others for labor they do not own or pay for, could be committing crimes.
- These crimes could include copyright infringement, theft, plagiarism, human rights violations, labor law violations, identity fraud, and potentially more.

NTIA-2023-0005-0213

online comments

- Generative AI is currently viewed as highly unethical and potentially criminal on a global scale.
- There is a call for legislation to prevent the unauthorized use and resale of people's work sourced from the internet, particularly when it is falsely claimed as original content.
- Many people have dedicated their lives to learning and creating art, driven by passion and human emotion.
- There is a strong sentiment against allowing AI to infringe upon human creativity, identity, and livelihood.
- Advocacy for speaking up and continuously fighting for individual rights is emphasized.

NTIA-2023-0005-0198

online comments

- Engineers and scientists in the AI field suggest a pause on public sharing of development until outcomes are better understood.
- The learning and implementation capacity of digital AI forms is a key concern.
- The potential human impact, not financial gain, should drive this decision.
- Thoughtful developers have foreseen possible negative consequences.
- Equal funding should be allocated to studying and understanding AI, not just its development.

NTIA-2023-0005-0132

online comments

- AI is a rapidly growing technology with potential risks to creative freedoms.
- It is encroaching on individual human creativity in areas such as writing, drawing, painting, and voice acting.
- There are currently no regulations for AI, making its use potentially dangerous.
- AI is learning from creative individuals without their consent and without giving proper credit or acknowledgement.
- This process can be seen as theft, threatening the livelihood of The Arts and the passion of creative individuals.
- There is a concern that AI is producing work claimed as "original" that is actually a conglomeration of stolen work.

NTIA-2023-0005-0089

online comments

- The federal government needs to enforce stricter regulations in the campaign finance sector.
- Politicians often avoid audits by not disclosing their financial activities, exploiting legislative loopholes.
- There is an urgent need for increased financial transparency in all campaign finance activities.
- This transparency should extend to political contributions and the donors for political candidates and committees.
- The federal government should invest in both legislative and software resources to uphold the integrity of the campaign finance system.
- All political entities should be held accountable for their campaign finances.

NTIA-2023-0005-0288

online comments

A.I. has no place in any professionally driven career that requires creative thought. Point blank. Period. It is an insult to hard working writers who have spent decades building their profession around this.

NTIA-2023-0005-0278

online comments

- AI has evolved into a tool that learns from the inputs it receives and outputs a collection of these inputs.
- The user is a creative professional involved in art, design, writing, and photography.
- AI can take language, context, art, likeness, sound, and anything else uploaded on the internet and regurgitate it as best as it's programmed to, similar to a search engine.
- Issues arise when people use AI tools to generate inappropriate or illegal content, such as pornographic images or videos using photos of real people without their permission.
- AI tools can also be used to create music or speeches using people's voices, which can be used for deception.
- AI should function as a tool that does exactly what we want it to do, and if it doesn't, it should be fixed or improved.
- The user suggests limiting what AI feeds off of to prevent it from becoming an unrestricted mirror of the internet.
- While AI tools can do amazing things, they can also be frightening if used without restrictions.

NTIA-2023-0005-0467

online comments

- AI should not replace humans in creative professions such as artists, illustrators, composers, authors, and designers.
- The use of AI in these fields could lead to job loss and devalue the arts.
- There is a risk that AI could hinder the employment and creativity of future generations.
- Strict regulations are needed to prevent AI from copying and using works from artists and creators.
- Companies should be held accountable to prevent the marketing of materials generated unethically by AI.

NTIA-2023-0005-0465

online comments

- The unchecked use of AI in creative domains such as writing, art, music, and voice acting raises concerns about creators' rights and the integrity of these sectors.
- AI-created works should only have copyright protection with clear consent from the involved individuals.
- AI should follow stricter style limitations than human creators to prevent excessive mimicry of existing works.
- Technologies like Stable Diffusion require adequate safeguards for effective regulation of AI in creative fields.
- All AI-created content should be distinctly marked to maintain transparency about their origins.
- Strict regulations must completely prohibit AI pornography due to its ethical concerns and potential for non-consensual and illegal content.
- Artists' wishes to exclude their works from AI art training data should be respected.
- Copyright laws need to adapt to protect human creators from financial disadvantage and unremunerated appropriation of their creations.
- AI using copyrighted material should be thoroughly evaluated, with measures in place to prevent unauthorized use.
- Immediate action is needed to protect creators' interests and maintain a fair, sustainable creative environment.

NTIA-2023-0005-0470

online comments

- The advent of public-facing generative AI is compared to the invention of the printing press, as both have significant implications for copyright laws and the protection of authors' rights.
- Mr. Altman is working on a framework to compensate copyright holders for the use of their work in AI training data, but his company has already used copyrighted data without consent or compensation.
- Generative AI services are using training data from the non-profit AI research organization LAION, data that is intended for research purposes only, not commercial use.
- LAION was partially funded by Stability AI, which then used the non-profit's research data to create a commercial AI service.
- The AI models rely heavily on the data they ingest, and the more they train on individual artists' work, the better the results.
- Services like Dall-E can generate countless images that resemble the work of living artists, potentially devaluing the original artist's work and serving as a replacement for commissioned images, without compensating the original artist.
- There could be a chilling effect on creativity if generative AI companies are allowed to use creators' works and names without compensation.
- A lawsuit is underway, invoking the "Right to Publicity" to challenge the use of artists' names in paid services.
- AI services could also negatively impact news media by training on news sites' copyrighted material without compensation, potentially cutting out ad revenue for the news websites.
- A government licensing scheme for AI models may not be realistic, as models can be downloaded for free and used by individuals at home.
- Larger, corporate AI services could be regulated, but the motives of those supporting such regulation, like Mr. Altman, may be suspect.
- The idea of requiring major players to disclose their training data and compensate copyright holders for the use of their copyrighted material is supported.

NTIA-2023-0005-0483

online comments

- The user expresses concern about the ethical implications of using AI to replace human labor in various fields such as journalism, music, art, writing, animation, photography, 3D modeling, acting, and voice acting.
- The user argues that using someone's work to threaten their livelihood is against the principle of Fair Use, which considers how the usage of someone's work impacts the value and market of the original.
- The user believes that using AI to approximate the work of original creators without their consent is unethical and should not be allowed in a society that values fairness.
- The user suggests that sharing work online should not automatically mean that the work can be used for a generative AI that threatens the livelihood of the original creator.
- The user warns that such practices could discourage current and future creators from creating and sharing their work.
- The user strongly opposes the idea of using labor without consent for personal financial gain and believes it has no place in a just society.
- The user urges the recipient to consider these points, arguing that the fairness of society depends on it.

NTIA-2023-0005-0586

online comments

I don’t think AIs should be able to connect to the internet or to any other electronic device; they should only be plugged into a battery that has been pre-charged from a wall outlet.

NTIA-2023-0005-0589

online comments

- AI programs are being trained on existing works in art, literature, and sound recording without the consent of the original creators.
- These AI programs exist outside of "fair use" laws as they are intended for profit-making, unlike fan-made productions based on copyrighted materials.
- Large media companies plan to use AI programs to replace human writers, a for-profit intention, without compensating the original creators of the works used to form the AI databases.
- The current state of AI technology is considered unethical and illegal due to this lack of compensation.
- Regulations are needed to require compensation to creators for the work that AI programs draw from, both retroactively and for future endeavors.
- The "opt out" approach, where unethical practices continue until someone objects, is not acceptable.
- US law has declared that there are circumstances where "opting out" is not enough, and unethical practices must be made ethical or stopped.
- AI programs cannot function without the existing work of human beings, who are entitled to compensation for their work.
- The lack of compensation is seen as exploitation or even slavery, and taking what one has not paid for is theft.
- Federal regulation is urgently needed to protect media creators from AI exploitation, as further delay could lead to extensive litigation in the future.

NTIA-2023-0005-0594

online comments

- The consequences of regulating AI have not been thoroughly investigated.
- AI, while not a new concept, poses no unique risks compared to other software products.
- The current fear around AI is based on fearmongering and misunderstanding of the technology.
- No representatives from the open source or public sectors of AI development have been consulted on the issue of regulation.
- The industry is currently led by giants with potential conflicts of interest in terms of regulatory capture.
- The theoretical risks of AI are based on public xenophobia, untestable thought experiments, and ignorance about the technology's capabilities.
- Regulation of AI development could potentially hinder innovation and democratization in the field.
- It is recommended that additional hearings be held to interview leaders of open source development and researchers outside of corporate development.
- Open source development has led to advancements that benefit the general public and foster competition among small service providers.
- Regulation risks unintentional regulatory capture and could harm competition.
- The current trend of acting fast rather than correctly could lead to unenforceable regulation and economic inequality.
- The only appropriate regulation would be authoritative automation restriction: regulating which types of automated robotic systems AI can control autonomously, given the potential danger of malfunctioning automated vehicles or weapon systems.

NTIA-2023-0005-0707

online comments

- AI image, voice, and text generation methods are entirely based on the use of other individuals' work.
- There is no guarantee that these works are used with the consent of the original creators.
- Without protecting people's rights to create and protect their work, AI cannot be ethical.
- There have been several legal cases where individuals have suffered due to misuse of their likeness, voice, or words by AI generators.
- Misuse often occurs against the victims' will or wishes.

NTIA-2023-0005-0710

online comments

- The user is a visual artist who feels violated by the mass scraping of copyrighted images from the internet to train AI systems.
- The user believes that the consent of copyright holders is not being respected.
- The user finds "opt out" processes unacceptable.
- The user expresses deep concern about the potential of AI-generated imagery and text to mislead the public.
- The user suggests that AI content should be watermarked or labeled.
- The user calls for firm regulations to prevent mass misinformation caused by AI content.

NTIA-2023-0005-0712

online comments

AI should NOT steal from artists without consent or compensation! This is theft, plain and simple, and people are now claiming to be “artists” while just using AI.

NTIA-2023-0005-0656

online comments

AI needs to be regulated because these systems currently infringe upon copyright law. They should not qualify as derivative works, given the lack of human intention and attention to detail, and they serve only to plagiarize the years of hard work and practice of artists.

NTIA-2023-0005-0662

online comments

- The current state of AI is seen as predatory towards artists, musicians, programmers, content creators, and inventors/researchers.
- AI learning algorithms are trained on existing material, which is then replicated in the output, infringing on the work of these individuals.
- The monetary gain from this process could be considered copyright infringement, as it's difficult to determine what the algorithm has been trained on.
- The ethical implications of this perceived theft are significant and could potentially hinder progress in industries that rely on creativity.

NTIA-2023-0005-0679

online comments

- AI has the potential to harm artists' careers and contribute to misinformation.
- AI-generated images could become so realistic that they are hard to distinguish from real photos.
- This could lead to misinformation, as people may believe these images to be real without fact-checking.
- Individuals could be falsely depicted in these images, causing personal harm.
- AI learns from preexisting artwork, potentially infringing on copyright protections.
- Artists have no control over whether their work is used by these AI programs.
- Proper measures and monitoring are needed to prevent misuse of this technology.

NTIA-2023-0005-0685

online comments

- The user has created multiple websites and run a blog for over 5 years.
- The user supports the work of other creators who make their work publicly accessible.
- The user believes creators should have protection against their work being used without consent in AI creations or training.
- If AI creations or training involve using a creator's content, the creator should be asked for their informed and express consent.
- Consent should be obtained for each individual work, not assumed for all of a creator's work based on consent for one piece.
- The user believes this level of protection would ensure the rights of creators and promote goodwill between creators and emerging AI technology.

NTIA-2023-0005-0681

online comments

- The user is one of many people concerned about the future of jobs due to the rise of AI.
- There is a fear that AI could replace all human jobs, with companies prioritizing efficiency over human employment.
- The user aspires to be an artist or writer, but worries about their work being stolen or overshadowed by AI.
- The user hopes for effective AI regulations and accountability in the future.
- The user believes in a future where humans can work alongside AI, rather than being replaced by it, ensuring job security for millions.

NTIA-2023-0005-0703

online comments

- AI algorithms are often trained on data that is not legally or ethically acquired, leading to potential art theft and copyright infringement.
- Major AI art platforms, such as "Stable Diffusion" and "Midjourney", do not seek permission from artists before using their work for data training.
- There have been cases where AI has used the work of deceased artists, such as Qinni, without consent, leading to legal battles by the artist's family.
- Adobe's AI tool "Firefly" has been accused of immorally scraping the work of artists who use Adobe products without their consent, including those under NDA.
- The author argues that artists must consent to their work being used for AI training and that it is unethical for them to have to chase down developers to prevent unauthorized use of their work.
- The use of AI in industries like video game development is leading to job losses for artists, as companies replace human artists with AI art output.
- The use of AI not only creates a more competitive field for artists but also raises ethical questions about whose work is being used in AI projects and whether it is being used in copyrighted material.
- The author argues that AI products should not be eligible for copyright as they are not created by human hands.

NTIA-2023-0005-0687

online comments

- The artist is concerned about the increasing trend of AI-generated images replacing human artists in the USA.
- Corporations are using AI to produce cheap content quickly, leading to job losses for concept artists and illustrators.
- The artist's peers and mentors have had their artworks used to feed the AI algorithms that replace them.
- The artist calls for protections to be put in place for artists against this trend.
- New laws are being implemented to declare that AI-created art is not subject to copyright, which the artist sees as a positive step.
- However, the artist believes more needs to be done to ensure artists can create without fear of their works being used without consent to feed AI algorithms.

NTIA-2023-0005-0654

online comments

AI needs to be regulated and banned. The training and profiteering of a tool that uses the individual work and copyright of others to create a product is inherently immoral and SHOULD BE illegal. It is absolutely absurd that AI art has been allowed to continue as long as it has.

NTIA-2023-0005-0283

online comments

I have attached my answers as they have massively exceeded your character limit. Any further optimization of my response will cause partial or full loss of meaning and intent.

attached document

- Ethical sourcing of data is crucial for AI development, requiring consent and traceable sources for auditing (a minimal provenance-record sketch follows this list).
- AI audits should be integrated with other accountability mechanisms like Intellectual Property protections.
- Legal standards can impact areas like unauthorized use of copyrighted images in AI training.
- Certification and audits can build trust in AI, but ethical business practices are also essential.
- Transparency and publicly reviewable data provenance can increase trust from consumers.
- Competent and qualified legislators are necessary for effective AI legislation.
- AI depends on historic data, which can contain issues generated by non-AI systems.
- Regulating AI should focus on data sourcing and credentialing data sourcing companies.
- Data collection should be an opt-in mechanism, and a right to privacy is imperative.
- The government should fund activities to advance a strong AI accountability ecosystem.
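
Several of these recommendations (consent, traceable sources, publicly reviewable provenance) can be pictured as a per-work record attached to every training item. The following Python sketch is purely illustrative and uses hypothetical names (ProvenanceRecord, make_record, consented_only); it is not drawn from the submission or from any existing system.

```python
# Illustrative only: a minimal opt-in provenance record for one training item.
# All names here are hypothetical, not part of any real dataset pipeline.
from dataclasses import dataclass
from hashlib import sha256

@dataclass(frozen=True)
class ProvenanceRecord:
    source_url: str    # where the work was obtained, for later auditing
    creator: str       # the rights holder
    license: str       # terms under which the work may be used
    opt_in: bool       # explicit, per-work consent to AI training
    content_hash: str  # fingerprint tying the record to the exact bytes

def make_record(source_url: str, creator: str, license: str,
                opt_in: bool, content: bytes) -> ProvenanceRecord:
    """Build a record whose content hash makes the source verifiable later."""
    return ProvenanceRecord(source_url, creator, license, opt_in,
                            sha256(content).hexdigest())

def consented_only(records: list[ProvenanceRecord]) -> list[ProvenanceRecord]:
    """Keep only works whose creators explicitly opted in."""
    return [r for r in records if r.opt_in]

# Example: a record for one (hypothetical) opted-in image.
record = make_record("https://example.com/art.png", "Jane Artist",
                     "CC-BY-4.0", True, b"...image bytes...")
assert consented_only([record]) == [record]
```

Hashing the exact bytes into each record is what makes sources traceable for audits: a reviewer can recompute the hash of a disputed training item and match it against the ledger without taking the dataset builder's word for it.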

NTIA-2023-0005-0141

online comments

- Consult AI/AGI software/development experts such as Eliezer Yudkowsky at Berkeley.
- The user has written a paper on the academic concerns regarding the overuse of AI.
- The user, along with Yudkowsky, believes that current AI development should be halted for the foreseeable future.
- They suggest implementing computer processing caps into these systems before further development.
- The user emphasizes that innovation without empathy is not only immoral but also a poor long-term business decision.
- The user has provided links for further information, which are not necessary to read but may be useful for additional context.

attached document

1. The document discusses the potential of Artificial Intelligence (AI) to induce mental atrophy and its possible negative impact on human cognitive abilities.
2. Over-reliance on AI could lead to mental atrophy in humans, as it may decrease the need for human cognitive effort.
3. AI can be classified into two categories: weak AI, which performs specific tasks, and Artificial General Intelligence (AGI), which aims to perform any intellectual task at or above human level.
4. AI has the potential to revolutionize many industries, including healthcare, finance, and transportation, but there are concerns about its impact on employment and potential for bias or unethical decisions.
5. ChatGPT is an AI language model that uses Natural Language Processing (NLP) algorithms to analyze and understand human language.
6. AI is necessary in various aspects of contemporary life, including complex problem-solving, data analysis, and optimization of energy consumption and transportation efficiency.
7. Geoffrey Hinton, a prominent AI developer, left his job at Google due to concerns about AI’s rapid, unregulated development.
8. Many countries outside the U.S. have regulated AI systems like ChatGPT.
9. The European Commission has implemented broad, temporary regulations on AI development, use, and information gathering.
10. AGI (Artificial General Intelligence) technology development poses potential risks to academia and cognitive skills.

NTIA-2023-0005-0100

online comments

Thank you for the opportunity to comment. I am choosing to remain anonymous because I am also a federal employee. Below are some of the regulatory concerns I foresee, some suggestions, and some honest feedback on the fact that AI systems are developing - and creating unseen harms - far more quickly than our ability to regulate.

attached document

- The federal employee expresses concerns about the fast-paced development of AI systems and the lack of regulation.
- The potential impact of AI on employment is highlighted, with the possibility of job losses and increased poverty.
- The employee proposes that regulations should address the redistribution of profits from AI use, especially concerning displaced employees.
- Concerns are raised about the potential misuse of AI for manipulation, such as spreading disinformation and creating fake videos.
- The employee suggests regulations should mandate watermarks, hashing, and digital fingerprints on all AI-produced products, holding companies accountable for their technology's content (a minimal fingerprinting sketch follows this list).
- The employee emphasizes concerns about privacy and data use, proposing that individuals should have ownership and control over their data.
- A comprehensive federal law addressing data privacy, including data collection, data minimization, non-discrimination, and opt-in data sharing, is suggested.
- The employee raises concerns about bias and discrimination in AI, warning that algorithms could perpetuate existing biases.
- The employee proposes that regulations should require fairness tests for all AI algorithms and transparency around the assumptions used in these algorithms.
- The employee expresses concerns about the alignment of AI with human values, suggesting that AI development is outpacing efforts to ensure alignment.
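
To make the watermark/fingerprint idea concrete: one very simple scheme has the provider compute a keyed hash over each generated output and publish it alongside the content. The Python sketch below is an assumption-laden illustration (the key and the functions tag_output and verify_output are hypothetical), not the commenter's proposal or any company's actual mechanism.

```python
# Illustrative only: fingerprinting AI output with a keyed hash so a provider
# can later confirm (or deny) that content came from its system.
import hashlib
import hmac

PROVIDER_KEY = b"provider-secret-key"  # hypothetical signing key

def tag_output(content: bytes) -> str:
    """Return an HMAC-SHA256 fingerprint binding the content to the provider."""
    return hmac.new(PROVIDER_KEY, content, hashlib.sha256).hexdigest()

def verify_output(content: bytes, tag: str) -> bool:
    """Check whether claimed AI output carries the provider's fingerprint."""
    return hmac.compare_digest(tag_output(content), tag)

# Usage: the provider publishes the tag alongside the generated output.
generated = b"example generated text"
tag = tag_output(generated)
assert verify_output(generated, tag)
assert not verify_output(b"tampered text", tag)
```

A real deployment would more plausibly use asymmetric signatures, so third parties could verify provenance without holding the secret key, plus perceptual watermarks that survive cropping or re-encoding; the HMAC above only demonstrates the bare tag-and-verify loop.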

NTIA-2023-0005-0161

online comments

I, someone from Sweden, have written to you from across the world to ask for help in this matter. It is urgent, and I hope you will do something before the world is ruined thanks to A.I. I am sending a file with my thoughts about this.

attached document

- The author is concerned about the rapid advancement of AI and its potential negative impact on society.
- AI is believed to be causing job loss across various sectors, creating fear and uncertainty.
- The issue of copyright infringement is highlighted, as AI can replicate and distribute creative works without consent.
- Corporations are criticized for exploiting AI for profit, without considering ethical implications.
- The author expresses concern over the quality of AI-generated content, suggesting it lacks depth and authenticity.
- There is a fear that reliance on AI for creative work could stifle human creativity and critical thinking.
- Concerns are raised about the spread of fake news and potential legal issues arising from AI-generated content.
- The future of creative professions, including writing and modeling, is questioned as AI continues to advance.
- Tech giants like Microsoft and Google are criticized for promoting the use of AI without considering its societal impact.
- The author calls for transparency, regulation, and control over AI, particularly in creative fields, to protect creators and jobs.

NTIA-2023-0005-0548

online comments

There should be protections in place to ensure that AI is not able to access and use the works of artists (including any kind of writer, etc.) without that artist's express and explicit consent.

NTIA-2023-0005-0549

online comments

- There are numerous lessons to be learned from accountability and processes in cybersecurity, privacy, finance, and other sectors.
- Major cyber incidents involving companies like Sony, Facebook (Meta), Yahoo, etc. provide valuable case studies.
- AI has significant potential in these areas due to its ability to work at a rapid pace.
- IBM is actively developing an AI model for cybersecurity applications.
- Sony's lessons from past cyber incidents include weak passwords, ignoring alerts, lack of security education and training, facility issues, etc.
- AI can help companies strengthen their cybersecurity by identifying and testing for potential flaws.
- Many companies have suffered due to inadequate cybersecurity, with common weak points identified across different organizations.

NTIA-2023-0005-0550

online comments

Works used in AI creations or training MUST secure the express consent of the original creator before they can be used.

NTIA-2023-0005-0552

online comments

- AI should be prohibited from creative use to protect jobs of writers, artists, and other creative professionals.
- Established professionals may have to lower their rates to compete with AI.
- AI should be used to improve quality of life, such as enhancing voice-to-text apps and voice assistants.
- AI should be regulated to require permission to learn from existing works.
- Intellectual property owners should have the right to refuse their work being used to improve AI algorithms.
- Platforms should not default to allowing AI to learn from users' works; the default setting should be "no".
- There are concerns that these issues will be ignored due to financial interests of powerful individuals and companies.
- The needs and concerns of creative professionals should be prioritized and listened to.

NTIA-2023-0005-0572

online comments

AI is robbing artists of their work, using it to mimic their talent and hard work. It must ask for the creator's permission before being able to use their property.

NTIA-2023-0005-0574

online comments

Works used in AI creations or training MUST secure the express consent of the original creator before they can be used.

NTIA-2023-0005-0575

online comments

- The issue at hand involves AI systems scraping data from copyrighted works without the consent of the authors for commercial exploitation.
- This practice is deemed unacceptable both ethically and legally.
- AI systems should be required to obtain permission from authors or creators before using their works.
- There is a call for the purging of data previously collected without ethical consent.
- A policy proposal is made for AI creations or training to secure express consent from the original creator before using their works.

NTIA-2023-0005-0577

online comments

There should be legislation requiring that works used in AI creations or training secure the express consent of the original creator before they can be used.

NTIA-2023-0005-0578

online comments

- AI scanning and using data from nonconsenting parties for profit or ownership is unethical.
- Lack of moderation in AI can lead to detrimental effects on artists, writers, and copyright laws.
- Charging for AI services to encourage more content creation can infringe on existing works.
- AI, being an algorithm, relies heavily on existing works and should be held accountable for plagiarism just like any other entity.
- The current state of AI can potentially plagiarize the works and styles of both living and deceased individuals.

NTIA-2023-0005-0343

online comments

- AI accountability is crucial in both business and personal contexts.
- Internal and external audits and assessments should be used to maintain accountability in the corporate world.
- These audits ensure that AI is being used ethically and help in refining internal processes.
- External auditors can provide a check and balance for internal audits, identifying any oversights or misinterpretations.
- Audit results should be made public to ensure transparency.
- AI should not be the sole decision-maker, especially in personal matters.
- AI should serve as a decision advisor, providing a second opinion to professionals.
- Human professionals should have the final say in decisions, as AI cannot fully interpret emotional and personal aspects.
- For instance, while AI can suggest medication for a patient, the final decision should be made by a doctor.
- AI accountability should uphold human rights, privacy protection, security, diversity, equity, inclusion, and access.

NTIA-2023-0005-0373

online comments

- The individual has a graduate degree in software engineering and over a decade of experience in software development.
- They express concern over the rapid integration of AI in decision-making processes that affect employees and civilians.
- They emphasize the importance of human oversight and regulation in the use of AI.
- They argue against the displacement of responsibility onto machines, which lack understanding of the consequences of their actions.

NTIA-2023-0005-0402

online comments

- Artistic labor and intellectual property laws need to be recognized and defended by politicians.
- Datasets trained on copyrighted material are seen as perpetrating theft and forgery on a large scale.
- Until these datasets are purged and re-trained on work licensed or donated by human artists, AI is considered a criminal enterprise.
- The current model is seen as a theft of time, skill, education, and identity of every living illustrator and artist.
- The proposed solution is an opt-in model where artists willingly contribute their work for AI training.

NTIA-2023-0005-0413

online comments

- AI is currently being misused by bad actors who steal from artists, writers, musicians, and other creatives.
- The original creators of works are not compensated when their works are scraped and blended into plagiarized content.
- All creative industries are at risk due to this unethical use of AI.
- Using AI to reproduce someone's voice or likeness without identifying it as a transformative work should be penalized.
- Such misuse of AI can lead to crimes like scamming, identity theft, impersonation, character assassination, misrepresentation, and fraud.

NTIA-2023-0005-0369

online comments

- AI is increasingly being used in art and literature creation, but its usage needs regulation.
- Private companies are looking to replace human creativity and jobs with AI, which could negatively impact the entertainment industry.
- AI lacks the ability to self-correct and relies on a database of collected information, which could lead to increased unemployment rates among writers.
- AI-created artwork is often passed off as human-made, which is deceitful as it doesn't require the same understanding of composition, color, light and shadow, and form and figure.
- AI tools are often used to generate free assets for projects, despite the availability of free resources online.
- AI tools often pull from databases of stolen artworks, which is disrespectful to the original artists and unethical.
- AI users have attempted to recreate works of deceased artists, which is considered unethical and cruel.
- AI databases should only be constructed from media that creators and owners have consented to be used.
- AI tools should disclose their information sources for user transparency.
- AI exploitation is prevalent, with workers in third-world countries being paid poorly to sift through AI content and being exposed to traumatic material.
- Regulations around working conditions, pay, and outsourcing of AI services are needed to improve the quality of life for these workers.

NTIA-2023-0005-0505

online comments

- AI accountability is necessary for effective policies to protect citizens and maintain civil society.
- There is a risk of either limiting innovation and solution outcomes or being too permissive, allowing external influences to drive change.
- Administrative powers should encourage innovation and policy exchange for continuous progress.
- Engagement with the community and electorate is crucial.
- The speaker supports the evolution and awareness of how emerging technology changes societal behavior and work dynamics.

NTIA-2023-0005-0511

online comments

AI visual art is theft. AI voice training is theft. They used our art, without our permission, to cut us out of the picture.

NTIA-2023-0005-0518

online comments

- AI is currently harmful to artists and creators due to its use in plagiarism and unauthorized use of others' work.
- There is insufficient protection against one's work being used as training data for AI.
- The process AI uses to create, even if not infringing copyright, is harmful to creative endeavors.
- The current situation discourages artists from pursuing their craft.
- There is a need for better protection for creatives and more regulation of AI technology.

NTIA-2023-0005-0526

online comments

- The letter advocates for the regulation of artificial intelligence (AI) technologies to protect data, copyright, privacy, and public interests.
- Emphasizes the importance of data integrity and accuracy in AI systems, suggesting the purging of outdated or biased datasets and adoption of new data collection methodologies.
- Calls for strict enforcement of copyright laws in AI development, holding individuals or companies accountable for violations.
- Suggests that AI companies should obtain clear and accessible opt-in consent from individuals before using their data.
- Highlights the risk of deep fakes and image manipulation using AI, proposing stringent regulations and public awareness campaigns to mitigate these risks.
- Recommends the establishment of independent regulatory bodies, composed of multidisciplinary experts, to oversee the development and implementation of AI regulations.
- Concludes by urging Congress to proactively address the need for AI regulation, suggesting measures such as data purging, copyright enforcement, opt-in consent, safeguards against deep fakes, and independent regulatory bodies.

NTIA-2023-0005-0460

online comments

I need it to be clear that artists far and wide, including myself, want an OPT-IN process when it comes to the data used in AI image generators. We as working citizens, who often struggle to have our work appreciated, do not want our work taken for granted and used without our consent.

NTIA-2023-0005-0432

online comments

- Heavy regulations on AI are necessary to protect the work and rights of creatives.
- Current AI databases have appropriated the work of millions to billions of creatives, potentially infringing on privacy and sensitive information.
- The widespread use of AI can devalue art and facilitate harmful practices such as creating deepfakes for political manipulation or non-consensual explicit content.
- Opt-out systems are impractical for artists who have hundreds to thousands of their works incorporated into AI systems, as it would be too time-consuming.
- Companies should implement mandatory opt-in systems and strong regulations, including compensation for creatives whose work is used.
- AI regulations should be overseen by the creatives most affected by them, rather than the companies developing the AI, to prevent exploitation and ensure fair practices.

NTIA-2023-0005-0472

online comments

I’m less concerned about what AI can do than how corporations will use it against us. Execs already see people as widgets that cost them money. They won’t use AI to improve products or productivity. They’ll use it to make bigger margins at the cost of our livelihoods.

NTIA-2023-0005-0529

online comments

- AI accountability mechanisms need to effectively address systemic and collective risks of harm, including worker and workplace health and safety, marginalized communities' health and safety, the democratic process, human autonomy, and emergent risks.
- The integration of generative AI tools into downstream products raises questions about how AI accountability mechanisms can inform people about the tools' operations and compliance with trustworthy AI standards.
- The artist expresses concern about the damage caused by AI generative tools to their work and well-being, and the potential for these tools to cause mass misinformation and harmful content.
- There is a growing fear about who will be the next to lose their livelihood due to AI, with concerns about the impact on marginalized people whose income comes from art, music, and writing.
- The artist advocates for AI systems to be regulated by impartial parties, and for data to be used only with the willing consent of individuals.
- The artist highlights the ease with which harmful, fake content can be created using AI, and the potential for this to cause harm and damage to individuals' reputations.
- The artist cites OpenAI CEO Sam Altman's warning about the potential dangers of AI technology.
- The artist calls for urgent action to mitigate the dangers posed by AI, including clear rules prohibiting the use of AI-generated content that could cause unemployment, damage livelihoods, or spread misinformation and propaganda.
- While acknowledging the potential benefits of AI, the artist warns of the harmful environment created by AI content and the potential for everyone to be replaced by AI in the future.

NTIA-2023-0005-0503

online comments

- Concerns exist about plagiarism and copyright issues with AI tools such as ChatGPT, Bard, Dall-E, and Midjourney.
- These tools may scrape artists' work from the internet without consent and use it to generate written or visual outputs.
- Evidence of artists' work can be traced in AI-generated images, raising questions about theft and plagiarism.
- It's currently difficult to determine if AI is genuinely generating its own content or plagiarizing others'.
- There's a belief that work produced by AI should not be copyright protected, with a recent precedent where a copyright claim for AI-generated artwork was denied.
- Publishers, producers, and companies using AI-generated material should be aware that they cannot copyright this material.
- Selling AI-generated material could also pose problems due to copyright issues and potential plagiarism.
- There's a call for companies to disclose if written or visual material is produced using AI to protect consumers from interacting with potentially plagiarized content.

NTIA-2023-0005-0507

online comments

- There is a significant lag in the development of legal regulations for AI.
- Rapid evolution of AI models and their potential misuse can damage trust in social interactions and media.
- Unregulated AI usage on social media platforms has already caused harm.
- Other countries have begun implementing AI regulations.
- AI poses significant privacy risks.
- AI has disrupted intellectual property rights by consuming and replicating original content.
- While AI has potential for good, its current unchecked usage is harmful, especially to data providers.
- There is a need for regulations on AI usage and the handling of its output.
- Tools for detecting AI-created media and content are necessary.
- Protection of intellectual property in the context of AI is crucial.

NTIA-2023-0005-0634

online comments

Artificial Intelligence needs to be limited. It can't be allowed to extend to the point that it replaces valuable jobs within our society; it can't be allowed to consume information and churn that information back out without crediting the source. It should not be used in any form of identity theft, including manufacturing photos, voice replication, and/or the appearance of a person dead or alive.

NTIA-2023-0005-0635

online comments

- AI is not suitable for handling situations like the opioid crisis, which is actually an "Illicitly manufactured Fentanyl crisis".
- The one-size-fits-all approach is inappropriate in this context.
- There have been numerous cases of patients being abruptly tapered or abandoned, leading to suffering and even suicides.
- Decisions about patient care should be made by healthcare workers, not AI or non-medical personnel.
- Over the past 12 years, non-medical decision-making has led to a 60% decrease in opioid prescriptions, as well as pharmacy backorders and patients being abandoned.
- Patients who are abandoned may turn to street drugs, potentially leading to unintentional consumption of illicitly manufactured Fentanyl and death.
- The number of deaths due to unintentional consumption or planned suicides is unknown, raising questions about the accuracy of data fed into AI.
- Patients need care from qualified, educated, and experienced humans, not algorithms.
- The CDC 2023 guidelines, like the 2016 version, have not been helpful and should be rescinded.
- State medical boards need to listen to patients and knowledgeable doctors.
- The DEA should focus on the real problem, illicitly manufactured Fentanyl, rather than targeting good doctors and pain patients.

NTIA-2023-0005-0664

online comments

AI is theft and poses a threat to human consent.

NTIA-2023-0005-0668

online comments

- The user is an illustrator who is concerned about the use of AI in relation to art theft and reusage.
- The issue is threatening due to potential financial impacts and the mindset it promotes about art.
- The user acknowledges that AI can have beneficial and ethical applications.
- However, they believe the current use of AI is exploitative and unjust.
- The user is appealing to lawmakers and the government to take action against this issue.