Journal of
Systemics, Cybernetics and Informatics
ISSN: 1690-4524 (Online)


Peer-reviewed journal via three different mandatory reviewing processes since 2006; from September 2020, a fourth mandatory peer-editing process has been added.

Indexed by
DOAJ (Directory of Open Access Journals), Academic Journals Database, and Google Scholar
(JSCI supplies DOAJ with metadata.)


Listed in
Cabell Directory of Publishing Opportunities and in Ulrich’s Periodical Directory


Published by
The International Institute of Informatics and Cybernetics


Re-Published in
Academia.edu
(A community of about 40,000,000 academics)


Honorary Editorial Advisory Board's Chair
William Lesso (1931-2015)

Editor-in-Chief
Nagib C. Callaos


Sponsored by
The International Institute of
Informatics and Systemics

www.iiis.org
 

Editorial Advisory Board

Quality Assurance

Editors

Journal's Reviewers
Call for Special Articles
 

Description and Aims

Submission of Articles

Areas and Subareas

Information to Contributors

Editorial Peer Review Methodology

Integrating Reviewing Processes


Smart Cities: Challenges and Opportunities
Mohammad Ilyas
(pages: 1-6)

Bridging the Gap: Communicating to Increase the Visibility and Impact of Your Academic Work
Erin Ryan
(pages: 7-12)

Cross-Cultural Online Networking Based on Biomedical Engineering to Motivate Transdisciplinary Communication Skills
Shigehiro Hashimoto
(pages: 13-17)

Interdisciplinary Approaches to Learning Informatics
Masaaki Kunigami
(pages: 18-22)

The Impact of Artificial Intelligence and the Importance of Transdisciplinary Research
R. Cherinka, J. Prezzama, P. O'Leary
(pages: 23-28)

Emotional Communication as Complex Phenomenon in Musical Interpretation – Proposal for a Systemic Model That Promotes a Transdisciplinary Process of Self-Formation and Reflection Around Expressiveness as a Lived Experience
Fuensanta Fernández de Velazco, Eduardo Carpinteyro-Lara, Saúl Rodríguez-Luna
(pages: 29-33)

A Multi-Disciplinary Cybernetic Approach to Pedagogic Excellence
Russell Jay Hendel
(pages: 34-41)

The Ethics of Artificial Intelligence in the Era of Generative AI
Vassilka D. Kirova, Cyril S. Ku, Joseph R. Laracy, Thomas J. Marlowe
(pages: 42-50)

Trans-Disciplinary Communication: Context and Semantics
Maurício Vieira Kritz
(pages: 51-57)

A Brave New World: AI as a Nascent Regime?
Jasmin Cowin, Birgit Oberer, Cristo Leon
(pages: 58-66)

The Role of Art and Science – Relational Dynamics in Human Ecology
Giorgio Pizziolo, Rita Micarelli
(pages: 67-75)

Advancing Entrepreneurship Education: An Integrated Approach to Empowering Future Innovators
Birgit Oberer, Alptekin Erkollar
(pages: 76-81)

Harmonizing Horizons: The Symphony of Human-Machine Collaboration in the Age of AI
Birgit Oberer, Alptekin Erkollar
(pages: 82-86)

How Do Students Learn Artificial Intelligence in Interdisciplinary Field of Biomedical Engineering?
Shigehiro Hashimoto
(pages: 87-91)

What is ChatGPT and its Present and Future for Artificial Intelligence in Trans-Disciplinary Communications?
Richard Segall
(pages: 92-98)


 

ABSTRACT


An Investigation of the Effectiveness of Facebook and Twitter Algorithm and Policies on Misinformation and User Decision Making

Jordan Harner, Lydia Ray, Florence Wakoko-Studstill


Prominent social media sites such as Facebook and Twitter use content-filtering algorithms that play a significant role in creating filter bubbles for many users. A filter bubble can be defined as content that reinforces existing beliefs while hiding content the user might otherwise have seen. Filter bubbles form when a social media site feeds user interactions into an algorithm that then exposes the user to more content similar to what they have previously interacted with. Continually exposing users to like-minded content can create what is called a feedback loop: the more a user interacts with certain types of content, the more the algorithm serves them similar viewpoints. This can expose users to dangerous or extremist content, as seen with the QAnon rhetoric that preceded the January 6, 2021 attack on the U.S. Capitol and the unprecedented propaganda surrounding COVID-19 vaccination. This paper hypothesizes that the secrecy around content algorithms, and their ability to perpetuate filter bubbles, creates an environment in which dangerous false information is pervasive and not easily mitigated by the existing false-information warning messages. Our research focused on disinformation regarding the COVID-19 pandemic. Both Facebook and Twitter display various forms of false-information warning messages, which sometimes include fact-checked research offering a counter-viewpoint to the information presented. Controversially, in most cases social media sites do not remove false information outright but instead promote these warning messages as the remedy for extremist or false content. The results of a survey administered by the authors indicate that users would spend less time on Facebook or Twitter once they understood how their data are used to influence their behavior on the sites and to shape the information fed to them via algorithmic recommendations.
Further analysis revealed that only 23% of respondents who had seen a Facebook or Twitter false-information warning message changed their opinion "Always" or "Frequently," while 77% reported that the warning messages changed their opinion only "Sometimes" or "Never," suggesting the messages may not be effective. Similarly, users who did not conduct independent research to verify information were more likely to accept false information as factual and less likely to be vaccinated against COVID-19. Conversely, our research indicates a possible correlation between having seen a false-information warning message and COVID-19 vaccination status.
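The feedback-loop dynamic described in the abstract can be sketched as a toy simulation. This is a hypothetical illustration only, not Facebook's or Twitter's actual recommendation algorithm: topic names, the multiplicative `boost` factor, and the click model are all assumptions chosen to show how repeated reinforcement of engaged-with content can narrow a feed toward one viewpoint.

```python
import random

def recommend(preference_weights, n_items=10):
    """Sample a feed of topics in proportion to the learned weights."""
    topics = list(preference_weights)
    weights = [preference_weights[t] for t in topics]
    return random.choices(topics, weights=weights, k=n_items)

def simulate_feedback_loop(rounds=50, boost=1.5, seed=0):
    """Each round the user engages with one recommended item, and the
    algorithm multiplies that topic's weight, so future feeds skew
    further toward topics already engaged with (a feedback loop)."""
    random.seed(seed)
    # Hypothetical topic mix; the user starts with no preference.
    prefs = {"news": 1.0, "sports": 1.0, "conspiracy": 1.0, "science": 1.0}
    for _ in range(rounds):
        feed = recommend(prefs)
        clicked = random.choice(feed)   # user engages with one feed item
        prefs[clicked] *= boost         # algorithm reinforces that topic
    total = sum(prefs.values())
    return {t: w / total for t, w in prefs.items()}

shares = simulate_feedback_loop()
dominant = max(shares, key=shares.get)
# Over many rounds, one topic tends to dominate the feed distribution,
# mirroring how a filter bubble concentrates like-minded content.
```

Because each click multiplies a topic's weight, early random engagement compounds: the most-clicked topic is recommended more often, which attracts more clicks, which raises its weight further, which is the self-reinforcing loop the paper argues makes false information hard to dislodge with warning messages alone.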

Full Text