Research Article | Peer-Reviewed

The Vague Future of AI: The Theory of AI Perfection

Received: 11 January 2024    Accepted: 31 January 2024    Published: 29 February 2024
Abstract

Artificial intelligence (AI) is becoming increasingly accessible to the general public. There is an ongoing debate regarding the implications of widespread AI adoption. Some argue that placing advanced AI systems in the hands of the general public could have dangerous consequences if misused, whether intentionally or unintentionally. Others counter that AI can be safe and beneficial if developed and deployed responsibly. This paper explores both sides of this complex issue. On the one hand, broad AI availability could boost productivity, efficiency, and innovation across industries and domains. Individuals may benefit from AI assistants that help with tasks like scheduling, research, content creation, recommendations, and more personalized services. However, without proper safeguards and oversight, AI could also be misused to spread misinformation, manipulate people, or perpetrate cybercrime. If AI systems become extremely advanced, there are also risks related to aligning AI goal systems with human values. On the other hand, with thoughtful coordination among policymakers, researchers, companies, and civil society groups, AI can be developed safely and for the benefit of humanity. Ongoing research into AI safety and ethics is crucial, as are governance frameworks regarding areas like data privacy, algorithmic transparency, and accountability. As AI becomes more deeply integrated into products and platforms, best practices should be established regarding appropriate use cases, human oversight, and user empowerment. With conscientious, ethical implementation, AI can empower individuals and enhance society. Key issues around alignment, security, and governance, however, must be proactively addressed to minimize risks as advanced AI proliferates. This will likely require evolving perspectives, policies, and scientific breakthroughs that promote innovation while putting human interests first.

Published in American Journal of Computer Science and Technology (Volume 7, Issue 1)
DOI 10.11648/j.ajcst.20240701.14
Page(s) 24-28
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2024. Published by Science Publishing Group

Keywords

AI, Perfection, Artificial Intelligence

Cite This Article
  • APA Style

    Sheikhzadeh, M., Amirmohammad-Bakhtiari, Nourmandipour, P. (2024). The Vague Future of AI: The Theory of AI Perfection. American Journal of Computer Science and Technology, 7(1), 24-28. https://doi.org/10.11648/j.ajcst.20240701.14


  • ACS Style

    Sheikhzadeh, M.; Amirmohammad-Bakhtiari; Nourmandipour, P. The Vague Future of AI: The Theory of AI Perfection. Am. J. Comput. Sci. Technol. 2024, 7(1), 24-28. doi: 10.11648/j.ajcst.20240701.14


  • AMA Style

    Sheikhzadeh M, Amirmohammad-Bakhtiari, Nourmandipour P. The Vague Future of AI: The Theory of AI Perfection. Am J Comput Sci Technol. 2024;7(1):24-28. doi: 10.11648/j.ajcst.20240701.14


  • @article{10.11648/j.ajcst.20240701.14,
      author = {Morteza Sheikhzadeh and Amirmohammad-Bakhtiari and Parham Nourmandipour},
      title = {The Vague Future of AI: The Theory of AI Perfection},
      journal = {American Journal of Computer Science and Technology},
      volume = {7},
      number = {1},
      pages = {24-28},
      doi = {10.11648/j.ajcst.20240701.14},
      url = {https://doi.org/10.11648/j.ajcst.20240701.14},
      eprint = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.ajcst.20240701.14},
      year = {2024}
    }
    


  • TY  - JOUR
    T1  - The Vague Future of AI: The Theory of AI Perfection
    AU  - Morteza Sheikhzadeh
    AU  - Amirmohammad-Bakhtiari
    AU  - Parham Nourmandipour
    Y1  - 2024/02/29
    PY  - 2024
    N1  - https://doi.org/10.11648/j.ajcst.20240701.14
    DO  - 10.11648/j.ajcst.20240701.14
    T2  - American Journal of Computer Science and Technology
    JF  - American Journal of Computer Science and Technology
    JO  - American Journal of Computer Science and Technology
    SP  - 24
    EP  - 28
    PB  - Science Publishing Group
    SN  - 2640-012X
    UR  - https://doi.org/10.11648/j.ajcst.20240701.14
    VL  - 7
    IS  - 1
    ER  - 


Author Information
  • Shamsipour Electrical Engineering Department, Shamsipour College, Tehran, Iran

  • Shamsipour Electrical Engineering Department, Shamsipour College, Tehran, Iran

  • Shamsipour Electrical Engineering Department, Shamsipour College, Tehran, Iran
