The AI Era: GPU Support Strategies for SMEs and Startups, Accelerating AI Transformation (AX) with Government Initiatives

~14 min read


Ministry of SMEs and Startups’ GPU Support: A Catalyst for AI Transformation (AX)

The Ministry of SMEs and Startups recently announced a plan to deploy 264 GPUs, secured under the 'National AI Project,' to foster innovation in small and medium-sized manufacturing sites and startups. This initiative goes beyond merely providing hardware: it is a crucial step toward helping SMEs, the backbone of the Korean economy, successfully transition into the AI era. Access to high-performance computing resources such as NVIDIA's B200 GPUs is essential for AI model development and deployment, and serves as a much-needed boost for SMEs that often lack sufficient funding and technological capability.

For this support to yield tangible results, however, efforts beyond simply handing out GPUs are required. Comprehensive measures, including professional talent development, technical consulting, and customized training programs, must run in parallel so that SMEs and startups can use the GPUs effectively. Solutions are also needed for the technical and legal challenges that arise when developing AI models and building GPU-based services: for instance, guidance on personal information protection law, data utilization regulations, and AI ethics, along with access to expert consultation.

Although the 2023 amendment of the SME Technology Innovation Promotion Act encourages data-driven decision-making and AI adoption, many SMEs still struggle to acquire and use data. To overcome these hurdles, the government should pursue support programs such as data-sharing platforms, assistance with data processing and analysis, and consulting on AI solution adoption. Incentives such as sharing success stories and recognizing outstanding companies can further encourage SMEs to adopt and leverage AI technologies.
According to a 2024 survey by Korea's National Information Society Agency (NIA), the AI adoption rate among domestic SMEs is under 10%, far below that of large enterprises. The gap stems from several factors: limited understanding of AI technology, a shortage of specialized personnel, and the burden of upfront investment costs. To close it, the government must expand AI education and consulting programs and subsidize AI solution adoption, actively accelerating the AI transformation (AX) of SMEs.

AI Agent Development and Manufacturing Innovation

The Ministry of SMEs and Startups' support rests on two pillars: the 'AI Agent' development initiative and the 'Super-Gap Startup' nurturing project. The 64 GPUs allocated to the 'AI Agent' initiative will fund technology development for optimizing processes and quality in small and medium-sized manufacturing facilities. This goes beyond boosting productivity: it enables data-driven decision-making and lays the groundwork for predictive smart factories. For example, AI agents can analyze real-time production-line data to forecast defects, detect equipment malfunctions early enough to prevent shutdowns, and learn from the expertise of skilled technicians to build automated quality-control systems. They can also support flexible production systems for personalized manufacturing.

AI agent development, however, is a complex process that extends well beyond GPU access. It spans data collection and cleaning, model training, performance evaluation, and system integration, each demanding specialized expertise. Many small and medium-sized manufacturing sites have inadequate data collection and management systems and a limited understanding of AI. Alongside AI agent development, the government must therefore strengthen data capabilities and expand AI education in these environments. It must also provide solutions for the data security, privacy, and AI ethics issues that arise along the way, for instance data anonymization technologies, strengthened security, AI ethics guidelines, and access to expert consultation.
In 2023, the average productivity improvement rate for SMEs participating in the Ministry of Trade, Industry and Energy’s Smart Factory Dissemination and Expansion project was only 20%. This was due to insufficient data collection and analysis system infrastructure required for smart factory implementation and inadequate utilization of AI technology. The government should link AI agent development with the Smart Factory Dissemination and Expansion project to accelerate the establishment of smart factories in small and medium-sized manufacturing sites.
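The defect-forecasting role described above can be illustrated with a minimal sketch: a rolling z-score detector that flags sensor readings deviating sharply from recent history. The sensor stream, window size, and threshold below are illustrative assumptions, not part of any ministry program; production systems would use far richer models.

```python
import statistics
from collections import deque

def detect_anomalies(readings, window=20, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the rolling mean of the previous `window` samples."""
    history = deque(maxlen=window)
    flagged = []
    for i, value in enumerate(readings):
        if len(history) == window:
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history)
            if stdev > 0 and abs(value - mean) / stdev > threshold:
                flagged.append(i)
        history.append(value)
    return flagged

# Simulated vibration-sensor stream: a stable baseline with one spike.
stream = [1.0, 1.02, 0.98, 1.01, 0.99] * 5 + [5.0] + [1.0] * 5
print(detect_anomalies(stream, window=10))  # → [25]
```

Even this toy detector shows why data capability matters: without a clean, continuously collected baseline, no threshold can be set meaningfully.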

Nurturing ‘Super-Gap Startups’ and Expanding the AI Ecosystem

The 200 GPUs allocated to the 'Super-Gap Startup' initiative will be used for strategic AI development, industry-specific AI solution development, and supporting prospective entrepreneurs. This will play a crucial role in enabling startups with innovative ideas to create new markets through AI technology and transform existing industry paradigms. In particular, strategic AI developed in collaboration with universities and government-funded research institutes can help commercialize fundamental research and secure core technologies capable of leading the global market, while industry-specific AI solutions can address problems unique to each sector and create new business models.

However, nurturing 'Super-Gap Startups' requires a comprehensive support system that goes beyond providing GPUs. Startups need help with business planning, fundraising, technology development, marketing, and international expansion, along with expert mentoring and consulting in each area. AI startups in particular face challenges ranging from technical difficulties to legal issues, ethical concerns, and market competition. The government should therefore provide customized support programs, for example education on AI-related laws and regulations, AI ethics guidelines, market analysis and marketing-strategy consulting, and investment attraction support.

According to a 2023 survey by the Korea Venture Capital Association, investment in domestic AI startups fell 30% year over year, a decline attributed to the global economic downturn, interest rate hikes, and dampened investment sentiment.
To revitalize investment in AI startups, the government should expand policy funding, organize investment pitch events, and support activities to attract foreign investors. Additionally, an objective evaluation system must be established to assess the technological capabilities of AI startups and enhance their investment value.

NVIDIA Nemotron Developer Day: The Importance of Nurturing AI Talent

Collaborating Developers: Cultivating Talent for the Future of AI

The 'NVIDIA Nemotron Developer Day,' co-hosted by the Ministry of Science and ICT and NVIDIA, is a significant event aimed at strengthening the capabilities of domestic AI developers. Sharing research results built on NVIDIA's open-source 'Nemotron' model and building practical skills are essential to the competitiveness of Korea's AI industry. The chance to experience 'full-stack AI technology,' from GPU infrastructure through data, models, and training techniques, should be especially valuable for honing real-world problem-solving abilities.

Nurturing AI talent, however, is not achieved through one-off events. It requires systematic education, opportunities for practical experience, and a continuous learning environment. AI curricula at universities and research institutions must shift from theory toward practice, and on-site training should expand through collaboration with businesses. Online education platforms, conference participation support, and overseas training programs should also help developers keep pace with the latest technology trends and acquire new skills.

According to a 2024 survey by the Institute for Information & Communications Technology Planning & Evaluation (IITP), Korea had roughly 25,000 AI professionals as of 2023, far fewer than advanced countries such as the United States and China, and the qualitative level of this talent pool is also assessed as lower than in leading nations.
To cultivate AI talent, the government must strengthen AI education programs at universities and research institutions, expand opportunities for practical experience through industry collaboration, provide various support programs to enhance developers’ capabilities, and implement policies to attract top international talent.

Strengthening Domestic AI Technology Competitiveness and Global Market Expansion

This event will see participation from domestic companies developing their own foundation models, such as SK Telecom, Upstage, ELICE Group, and Motif Technologies, which will discuss strategies for strengthening domestic AI competitiveness and expanding into the global market. It is a crucial opportunity for Korean AI companies to collaborate and sharpen their edge in the global arena. A hackathon will also let participants apply NVIDIA technology to challenges such as AI agent-based problem-solving, industry-specific model development, and high-quality data generation, helping developers turn creative ideas into working solutions.

Competing globally, however, takes more than technological prowess: business strategy, marketing, and international networks all matter. Because AI technology changes rapidly, continuous R&D investment and innovation are essential, as is understanding global market trends and tailoring services to local markets. The government should offer support programs such as technology-development funding, overseas-expansion consulting, and help building international networks, and should participate actively in international AI standardization to promote Korean technologies globally.

According to a 2023 survey by the Ministry of Science and ICT, Korea's AI technology level stands at roughly 80% of the United States', so a gap remains. The government must expand R&D investment and strengthen industry-academia-research collaboration to enhance AI competitiveness.
Furthermore, it should promote the commercialization of AI technology and pursue policies to foster new AI-based industries.

The Convergence of AI and Web3.0: Sandoll’s New Challenge

Innovating Content Creator Platforms and Securing Future Growth Engines

The font specialist Sandoll's establishment of 'Sandoll Square,' a new business unit combining AI and Web3.0 technologies, together with its open recruitment for a CEO and CTO, is an intriguing move. The strategy aims to enhance content production with AI and to build a platform where data and services connect organically through Web3.0: the next-generation, blockchain-based internet environment in which individuals directly own and manage their data. The convergence of AI and Web3.0 opens new possibilities for content creators.

That convergence, however, brings challenges that are not only technical but also legal, ethical, and experiential. It requires compliance with regulations on personal information protection, data security, copyright, and content moderation, plus efforts to ensure the fairness, transparency, and accountability of AI algorithms, and investment in user interfaces, user-experience design, and security so that users get a safe, convenient Web3.0 environment.

If Sandoll Square overcomes these hurdles, it can offer creators real value: for example, AI-based automatic font design generation, Web3.0-based font copyright protection, and personalized font recommendations, plus a platform where creators sell their work directly and earn revenue. The global Web3.0 market was estimated at roughly 10 trillion Korean won in 2023 and is projected to reach about 100 trillion won by 2030.
The Web3.0 market has high growth potential in various sectors, including content, finance, gaming, and social media, and can create even more innovative services through convergence with AI technology. Sandoll Square should leverage the growth potential of the Web3.0 market to secure new growth engines and evolve into a company that leads the global content market.
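The font copyright protection idea mentioned above can be sketched in miniature as a hash-chained provenance ledger: each registration records the font file's fingerprint and links to the previous record, so tampering is detectable. This is a simplified illustration, not Sandoll's actual architecture; the function names and record fields are assumptions, and a real Web3.0 deployment would anchor such records on an actual blockchain.

```python
import hashlib
import json

def register_font(font_bytes: bytes, creator: str, ledger: list) -> dict:
    """Append a tamper-evident provenance record for a font file.
    Each record stores the hash of the previous record, so any later
    modification of the ledger becomes detectable."""
    prev_hash = ledger[-1]["record_hash"] if ledger else "0" * 64
    record = {
        "creator": creator,
        "font_sha256": hashlib.sha256(font_bytes).hexdigest(),
        "prev_hash": prev_hash,
    }
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(record)
    return record

def verify_ledger(ledger: list) -> bool:
    """Recompute every link in the chain; False if anything was altered."""
    prev_hash = "0" * 64
    for record in ledger:
        body = {k: v for k, v in record.items() if k != "record_hash"}
        if body["prev_hash"] != prev_hash:
            return False
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != record["record_hash"]:
            return False
        prev_hash = record["record_hash"]
    return True

ledger = []
register_font(b"font-v1-binary", "designer-a", ledger)
register_font(b"font-v2-binary", "designer-b", ledger)
print(verify_ledger(ledger))  # → True
```

The chained-hash design is the key point: proving who registered a font first does not require trusting the platform operator, only the integrity of the chain.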

Google Gemini in Chrome: Ushering in the Browser AI Era

Google Ecosystem Integration and Image Generation Model Implementation

Google's official launch of 'Gemini in Chrome' in Korea, which integrates generative AI directly into the browser, will bring significant changes to the web browsing experience. The update, based on the latest Gemini 3.1 model and initially available on desktop and iOS, marks the browser's evolution from a simple page-viewing tool into an AI-powered intelligent assistant. The ability to summarize and analyze content, answer context-based queries, and search past browsing history without interrupting the browsing flow can dramatically improve the user experience.

The feature also raises concerns about privacy, data misuse, and AI ethics. Sensitive data such as browsing history, search records, and personal information could feed AI model training, and algorithmic bias could skew results. Google must therefore strengthen its privacy policy for Gemini in Chrome, ensure transparency in data usage, and continuously monitor and improve its algorithms to prevent discriminatory outcomes. Users, for their part, should understand the privacy settings, data-usage consents, and potential biases before relying on the feature.

According to a 2023 survey by the Korea Internet & Security Agency (KISA), over 70% of domestic internet users are worried about privacy violations, and they perceive the risk as especially high for AI-powered services. Google must enhance Gemini in Chrome's privacy protections to ease these anxieties.

AI Regulatory Sandbox: Balancing Innovation and Safety

Promoting AI Technology Development Through Regulatory Innovation

With the rapid advancement of AI technology, the regulatory sandbox system is drawing significant attention. A regulatory sandbox temporarily suspends or relaxes existing regulations for a set period so that innovative technologies or services blocked by current rules can be tested in the market. The AI sector benefits especially, given its rapid technological change and frequent conflicts with existing law.

Promoting innovation, however, must be balanced against consumer protection, personal data privacy, and social safety, with efforts to minimize the side effects of deregulation. For example, accountability for harm from misused AI must be clearly assigned and remedies established in case of damage; transparency in data usage must be ensured; and user consent procedures strengthened. The government should operate the AI regulatory sandbox by balancing innovation and safety, supporting the socially beneficial development of AI.

According to the Ministry of Science and ICT's 2023 operating results, 50 AI-related projects were approved under the sandbox, attracting roughly 20 billion Korean won in investment. The system is judged effective at promoting AI development and enabling new business models, and the government should continue to expand it and strengthen policy support to enhance AI competitiveness.

Responsible AI Development and Ethical Problem Solving

Given AI's widespread impact across society, responsible development and the resolution of ethical issues are paramount. Problems such as discrimination from algorithmic bias, privacy violations, job displacement, and the weaponization of AI demand societal discussion and concrete solutions.

AI developers must take responsibility for the societal impact of their technology, comply with ethical standards, and ensure transparency, including the ability to explain how their algorithms work. The government should establish AI ethics guidelines and strengthen developer education to foster a culture of responsible development, enact regulations against misuse, and create institutional mechanisms for resolving AI-related disputes. Citizens, meanwhile, should build their understanding of AI, keep a critical perspective on AI-related issues, and participate in shaping the technology's direction through social consensus.

In 2023, the Organization for Economic Co-operation and Development (OECD) published its Recommendations on AI Ethics, setting international standards for developing and using AI. The OECD emphasized transparency, accountability, fairness, and safety, stating that AI should evolve in a way that respects human dignity and rights. The Korean government should draw on these recommendations when establishing domestic AI ethics guidelines and standards for AI development and utilization.
