
Deregulation, Copyright, AI: Historical Perspectives, Current Developments, Ethical and Social Challenges. From Dunford to JD Vance, Through OpenAI
Copyright and the protection of intellectual property on one side. Economic and political competitiveness on the other. That is what Dunford maintained years ago. Today, JD Vance and OpenAI reiterate it. How far can the bar be raised?
DukeRem, 19 March 2025


Copyright and the protection of intellectual property on one side. Economic and political competitiveness on the other.

This is what Dunford, former Chairman of the Joint Chiefs of Staff, maintained years ago. It is also what OpenAI demands today. To underscore the current relevance of the issue, just today JD Vance reiterated the importance of a regulatory exemption to unlock the sector’s innovative potential.

In the following in-depth analysis, you will discover why the tension between the protection of creative works and the need to leverage data to train artificial intelligence systems lies at the heart of a global strategic challenge.

We will examine how overly rigid restrictions can stifle innovation, compromising the ability of algorithms to respond swiftly in critical situations and potentially endangering the technological leadership of the United States in a contest where China is poised to capitalize on every available margin of data.

We will further explore the implications of the proposals advanced by OpenAI, illustrating how a revision of the regulatory framework could be the key to achieving a balance between the protection of intellectual property rights and competitive progress. Reading this article will reveal how every regulatory decision can impact the future of security and global innovation. Enjoy reading, and as always, follow Turtles AI daily for up-to-date news on AI and technology, as well as other exclusive in-depth analyses like this one.

Joseph Dunford was a prominent figure in the United States armed forces, serving as a Marine Corps general and as Chairman of the Joint Chiefs of Staff until 2019, a role in which he underscored the growing importance of emerging technologies in national defense.

During his career, Dunford expressed a clear strategic vision regarding global competition, firmly asserting that artificial intelligence would assume a central role in the contest between the United States and China within six or seven years—a prediction that remains extremely relevant today.

His observations were based on a careful evaluation of the factors that determine military superiority, where the ability to make rapid and precise decisions in conflict situations represents a decisive advantage. In this context, artificial intelligence not only supports data analysis but becomes an essential tool for formulating real-time strategies, enabling swift responses to operational developments.

However, such an approach requires access to enormous amounts of data—an element identified as one of the main obstacles for the United States compared to China. The primary reason lies in the difference in population and in the regulatory constraints that limit data usage, including privacy laws, copyright, and other legal restrictions.

The ability to train complex algorithms depends on a continuous flow of information, and the availability of quality data can significantly impact the effectiveness of AI applications in the military sphere. These observations not only illustrate a rapidly changing technological reality but also highlight the strategic dilemma faced by a country that has always relied on innovation as its strength. The contrast between the rigidity of regulatory constraints and the need to fully exploit the potential of technology represents a challenge that involves technical, ethical, and operational aspects.

Dunford’s analysis, more relevant than ever, fits into a broader debate in which considerations of national security intertwine with data protection and the safeguarding of individual rights, creating a complex and constantly evolving situation. The competitive landscape between the United States and China emerges as a multidimensional challenge encompassing technological innovation, military strategy, and regulatory policy, elements that interact in complex ways and leave little room for easy compromise.

Dunford’s assessment highlights a structural gap that pits two different systems against each other: on one hand, the American model that prioritizes data protection and adherence to privacy and intellectual property regulations; on the other, the Chinese approach that allows broader access to information derived from a large population and less restrictive data management policies.

This disparity translates into a potential advantage for China in training artificial intelligence systems, which, in conflict situations, must process complex scenarios and provide responses in extremely short timeframes. Operational precision and rapid decision-making—essential requirements in the military field—largely depend on the quantity and quality of processed information, making data availability a decisive factor in the ongoing technological contest. Analysts emphasize that the data gap, driven by demographic size and domestic regulations alike, forces the United States into a race against time to innovate and find alternative solutions to compensate for the lack of robust datasets.

American military institutions have engaged in collaborations with the private sector and universities, seeking to integrate AI models developed in the civilian sphere into military applications, even as they contend with a regulatory framework that does not favor the free circulation of data. The discussion does not end with a mere technological comparison but also encompasses ethical and legal issues concerning the balance between national security and individual rights, creating a tension that demands thoughtful decisions and targeted interventions.

Dunford’s strategic assessments, supported by a detailed analysis of neural network training mechanisms, invite a profound reflection on the role of data in defining future military power, emphasizing that the ability to transform information into operational decisions is an invaluable asset in a global environment characterized by fierce and continuously evolving competition. Innovations in the field of artificial intelligence are unfolding within a context of radical transformations that affect numerous sectors, but the military domain represents an area where every technological advantage translates into concrete operational capabilities. Analyses conducted by security and technology experts underscore the importance of developing systems capable of interpreting heterogeneous and complex data, able to provide immediate responses during crisis situations.

The efficiency of algorithms, processing speed, and the ability to learn from real-world situations are key elements that can determine the outcome of strategic operations. AI systems, thanks to their modular architecture, allow for the integration of information from multiple sources—such as satellite images, field sensors, and inter-service communications—creating a synergistic data network that can be leveraged to enhance operational readiness.

The disparity between the amount of data available in the United States and that collected in China is not only a matter of the number of sources but also involves differences in the quality and accessibility of information. Regulatory policies adopted in the American context aim to protect privacy and respect intellectual property rights—elements that, although fundamental to safeguarding citizens, limit the use of extensive datasets for military applications.

Conversely, the Chinese approach, although it raises ethical concerns, offers conditions that favor rapid data acquisition and integration. This dynamic fosters a rivalry that extends beyond technological boundaries, influencing geopolitical strategies and the design of defense policies. American military institutions must therefore contend with an environment in which the tradition of innovation clashes with the need to adhere to stringent regulatory standards, necessitating a reevaluation of collaboration methods between the public and private sectors.

The integration of advanced methodologies for generating synthetic data, simulating operational scenarios, and sharing information on an international level represents possible solutions that, although not without challenges, offer a way out of a dilemma that could compromise military supremacy in an evolving global context. The potential of AI technologies in the military realm is manifested through applications ranging from surveillance to command and control, integrating automated systems that operate in dynamic and unpredictable environments. The ability to analyze data in real time, identify patterns, and provide strategic recommendations holds significance that transcends the technological sphere, directly affecting operational planning and resource management in conflict situations.
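To make the synthetic-data idea concrete, here is a minimal, hypothetical sketch in plain Python (function names and parameters are purely illustrative, not drawn from any actual military or vendor system): it augments a scarce set of real observations by perturbing them with controlled noise, one of the simplest data-augmentation techniques used when real data is limited.

```python
import random

def synthesize(samples, n_new, jitter=0.05, seed=42):
    """Generate synthetic samples by perturbing real ones with Gaussian noise.

    jitter scales the noise relative to each feature's magnitude, so larger
    measurements receive proportionally larger perturbations.
    """
    rng = random.Random(seed)
    out = []
    for _ in range(n_new):
        base = rng.choice(samples)  # pick a real observation to perturb
        out.append([x + rng.gauss(0, jitter * abs(x) + 1e-9) for x in base])
    return out

# A deliberately scarce set of real two-feature observations.
real = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.8]]
augmented = real + synthesize(real, n_new=100)
print(len(augmented))  # 103
```

Real augmentation pipelines are far more sophisticated (simulation, generative models, domain randomization), but the principle is the same: expand a limited dataset without collecting new restricted data.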

Dunford’s expressed vision underlines a critical point: the disparity in data availability is not merely a quantitative issue but substantially affects the quality of machine learning and, consequently, the precision of decisions. In a system that relies on machine learning, the ability to generalize from concrete examples determines the reliability of the models employed. The United States, despite boasting advanced infrastructure and a tradition of excellence in research and development, faces structural limitations arising from stringent regulations that protect personal data and confidential information. These restrictions, while safeguarding fundamental rights, result in a reduction of the available information to fuel AI systems, in contrast to an environment in which China can rely on a critical mass of data.

This difference translates into a strategic advantage, as the operational effectiveness of algorithms largely depends on the variety and richness of the training material.
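As a toy illustration of why the variety, not just the volume, of training data matters, the following sketch (plain Python, illustrative assumptions only: a known linear ground truth and Gaussian measurement noise) fits the same simple model on a narrow and on a wide sampling of the input space, then compares how each generalizes across the full operating range.

```python
import random

def fit_line(xs, ys):
    """Closed-form ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def mse(model, xs, ys):
    """Mean squared error of a fitted line on held-out points."""
    a, b = model
    return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

rng = random.Random(0)

def truth(x):    # the underlying relationship
    return 2 * x + 1

def observe(x):  # a noisy measurement of the truth
    return truth(x) + rng.gauss(0, 0.1)

# Same number of samples, but drawn from a narrow vs. a wide slice of the input space.
narrow_x = [rng.uniform(0, 1) for _ in range(50)]
wide_x = [rng.uniform(0, 10) for _ in range(50)]

m_narrow = fit_line(narrow_x, [observe(x) for x in narrow_x])
m_wide = fit_line(wide_x, [observe(x) for x in wide_x])

# Evaluate both models everywhere the system must actually operate.
test_x = [rng.uniform(0, 10) for _ in range(200)]
test_y = [truth(x) for x in test_x]
mse_narrow = mse(m_narrow, test_x, test_y)
mse_wide = mse(m_wide, test_x, test_y)
print(f"narrow-range MSE: {mse_narrow:.4f}  wide-range MSE: {mse_wide:.4f}")
```

The wide-range model typically tracks the true relationship across the whole domain, while the narrow-range one degrades when forced to extrapolate, which is the quantitative point behind the quality-and-variety argument above.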

Just today, JD Vance, the current U.S. Vice President, stated that easing artificial intelligence regulations represents an essential strategic move to bolster United States competitiveness, arguing that freeing the sector from tight regulations can benefit both technological innovation and the well-being of American workers.

In this regard, OpenAI has recently submitted a formal request to the United States government regarding the possibility of training AI models using copyrighted content as well—a claim that fits within a strategy aimed at ensuring American technological competitiveness. OpenAI argues that the copyright regime, based on the principle of fair use, is an essential pillar for the development of innovation, as it allows algorithms to learn from a multitude of data, transforming protected works into foundational elements for training without undermining the commercial value of the original sources.

This approach has been presented as a strategic necessity to prevent overly restrictive regulations from compromising the United States’ ability to maintain a competitive edge compared to countries where access to data—even protected data—is less limited. OpenAI contends that limiting the training of models on copyrighted content would mean forgoing a fundamental component of machine learning, as the variety and richness of textual and multimedia material represent indispensable resources for developing algorithms capable of operating in complex situations and making decisions within short timeframes.

The proposal is based on an analysis that shows how greater availability of data, even if protected, can translate into a qualitative leap in the precision and speed of AI applications—a crucial element in the military and national security domains.

OpenAI further emphasizes that the U.S. regulatory system, with its focus on protecting individual rights and intellectual property, must not undermine the ability to innovate in an ever-evolving sector, but rather must find a balance that allows full exploitation of the potential offered by data.

In line with this vision, JD Vance has indeed pointed out that a reduction in regulations can unleash the innovative potential of American companies, strengthening the country’s technological leadership and arguing that such an approach promotes economic growth and employment.

The request to the government thus aims to define a regulatory framework that allows operations within the principles of fair use, while simultaneously ensuring respect for rights holders and encouraging technological research and development.

OpenAI highlights that in a context of increasing international competition—especially against China, which has access to a critical mass of data without similar restrictions—adopting a more flexible policy regarding the use of protected content is not only a matter of economic competitiveness but also of national security.

The argument is based on the observation that if the United States fails to ensure access to a broad range of information for training models, the gap between the quality of domestically developed algorithms and those of foreign competitors will narrow in a concerning manner. Furthermore, OpenAI proposes that the government engage in an open dialogue with private sector stakeholders with the goal of establishing clear guidelines that balance the need to protect copyright with the need to innovate responsibly and sustainably.

According to the company, this initiative does not represent a departure from the fundamental principles of intellectual property protection but rather a necessary revision that allows developers to harness essential resources without incurring punitive practices that could slow technological progress. In summary, OpenAI’s request is intended to preserve American excellence in the field of artificial intelligence, ensuring that the regulatory system supports the transformation of data into operational tools capable of effectively addressing global challenges and defending the United States’ strategic position on the international stage.

Experts note that technological competition is not isolated from geopolitical considerations, as the control of information and the ability to translate it into military power are interconnected aspects that require coordinated interventions at both domestic and foreign policy levels. The challenge unfolds on multiple fronts, involving the evolution of technologies, the definition of regulatory standards, and the need to preserve democratic values, creating a complex landscape in which every decision holds significance beyond mere technological progress. The implications of this strategic analysis highlight a scenario in which the dynamics of technological innovation are intertwined with political and regulatory choices, offering neither easy solutions nor unequivocal answers.

The focus on the role of data and the way it is managed within complex systems suggests that the race for artificial intelligence is a multilevel phenomenon capable of reshaping the logic of the “balance of power.”

Dunford’s assessment stimulates a debate that extends into the realms of research, security, and governance, urging decision-makers to reassess operational strategies and information management policies in a context characterized by unprecedented challenges.

By integrating quantitative and qualitative data, the analysis examines the sustainability of technological growth and the possibility of innovating responsibly, maintaining a balance between development and regulatory rigor. It stimulates a debate that carefully weighs opportunities against operational constraints, a vision that significantly broadens the international dialogue.