Journal of
Systemics, Cybernetics and Informatics
ISSN: 1690-4524 (Online)


Peer-reviewed journal via three different mandatory reviewing processes since 2006; from September 2020, a fourth mandatory peer-editing process has been added.

Indexed by
DOAJ (Directory of Open Access Journals), Academic Journals Database, and Google Scholar

JSCI supplies DOAJ with metadata.


Listed in
Cabell Directory of Publishing Opportunities and in Ulrich’s Periodical Directory


Published by
The International Institute of Informatics and Cybernetics


Re-Published in
Academia.edu
(A Community of about 40,000,000 Academics)


Honorary Editorial Advisory Board's Chair
William Lesso (1931-2015)

Editor-in-Chief
Nagib C. Callaos


Sponsored by
The International Institute of
Informatics and Systemics

www.iiis.org
 


Education 5.0: Using the Design Thinking Process – An Interdisciplinary View
Birgit Oberer, Alptekin Erkollar
(pages: 1-17)

Impact of Artificial Intelligence on Smart Cities
Mohammad Ilyas
(pages: 18-39)

A Multi-Disciplinary Cybernetic Approach to Pedagogic Excellence
Russell Jay Hendel
(pages: 40-63)

Data Management Sharing Plan: Fostering Effective Trans-Disciplinary Communication in Collaborative Research
Cristo Ernesto Yáñez León, James Lipuma
(pages: 64-79)

From Disunity to Synergy: Transdisciplinarity in HR Trends
Olga Bernikova, Daria Frolova
(pages: 80-92)

The Impact of Artificial Intelligence on the Future Business World
Hebah Y. AlQato
(pages: 93-104)

Wi-Fi and the Wisdom Exchange: The Role of Lived Experience in the Age of AI
Teresa H. Langness
(pages: 105-113)

Older Adult Online Learning during COVID-19 in Taiwan: Based on Teachers' Perspective
Ya-Hui Lee, Yi-Fen Wang, Hsien-Ta Cha
(pages: 114-129)

Data Visualization of Budgeting Assumptions: An Illustrative Case of Trans-disciplinary Applied Knowledge
Carol E. Cuthbert, Noel J. Pears, Karen Bradshaw
(pages: 130-149)

The Importance of Defining Cybersecurity from a Transdisciplinary Approach
Bilquis Ferdousi
(pages: 150-164)

ChatGPT, Metaverses and the Future of Transdisciplinary Communication
Jasmin (Bey) Cowin
(pages: 165-178)

Trans-Disciplinary Communication for Policy Making: A Reflective Activity Study
Cristo Leon
(pages: 179-192)

Trans-Disciplinary Communication in Collaborative Co-Design for Knowledge Sharing
James Lipuma, Cristo Leon
(pages: 193-210)

Digital Games in Education: An Interdisciplinary View
Birgit Oberer, Alptekin Erkollar
(pages: 211-230)

Disciplinary Inbreeding or Disciplinary Integration?
Nagib Callaos
(pages: 231-281)


 

ABSTRACT


An Investigation of the Effectiveness of Facebook and Twitter Algorithm and Policies on Misinformation and User Decision Making

Jordan Harner, Lydia Ray, Florence Wakoko-Studstill


Prominent social media sites such as Facebook and Twitter use content and filter algorithms that play a significant role in creating filter bubbles that may trap many users. A filter bubble is content that reinforces users' existing beliefs while shielding them from content they might otherwise have seen. Filter bubbles arise when a social media site feeds user interactions into an algorithm that then exposes the user to more content similar to that with which they have previously interacted. By continually exposing users to like-minded content, the site creates a feedback loop: the more a user interacts with certain types of content, the more they are algorithmically bombarded with similar viewpoints. This can expose users to dangerous or extremist content, as seen with the QAnon rhetoric that contributed to the January 6, 2021 attack on the U.S. Capitol, and with the unprecedented propaganda surrounding COVID-19 vaccinations.

This paper hypothesizes that the secrecy surrounding content algorithms, together with their ability to perpetuate filter bubbles, creates an environment in which dangerous false information is pervasive and not easily mitigated by the existing false-information warning messages. Our research focused on disinformation regarding the COVID-19 pandemic. Both Facebook and Twitter display various forms of false-information warning messages, which sometimes include fact-checked research offering a counter-viewpoint to the information presented. Controversially, in most cases social media sites do not remove false information outright, but instead promote these warning messages as the remedy for extremist or false content.

The results of a survey administered by the authors indicate that users would spend less time on Facebook or Twitter once they understood how their data are used to influence their behavior on the sites and to shape the information fed to them via algorithmic recommendations. Further analysis revealed that only 23% of respondents who had seen a Facebook or Twitter false-information warning message changed their opinion "Always" or "Frequently," while 77% reported that the warning messages changed their opinion only "Sometimes" or "Never," suggesting the messages may not be effective. Similarly, users who did not conduct independent research to verify information were more likely to accept false information as factual and less likely to be vaccinated against COVID-19. Conversely, our research indicates a possible correlation between having seen a false-information warning message and COVID-19 vaccination status.
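The feedback loop the abstract describes — interactions feed a recommender, which then serves more of the same, which generates more interactions — can be illustrated with a minimal Pólya-urn-style simulation. This sketch is not from the paper; all names and parameters are illustrative assumptions.

```python
import random

def simulate_feedback_loop(rounds=200, categories=5, boost=1.0, seed=42):
    """Toy model of a recommender feedback loop.

    Each round, the site recommends a content category with probability
    proportional to the user's past interactions (starting from a uniform
    prior). Engaging with the recommended category increases its weight,
    making it more likely to be recommended again -- the feedback loop
    that produces a filter bubble.
    """
    rng = random.Random(seed)
    interactions = [1.0] * categories  # uniform prior over content types
    for _ in range(rounds):
        # Draw a category proportional to accumulated engagement.
        r = rng.uniform(0, sum(interactions))
        cumulative = 0.0
        for i, weight in enumerate(interactions):
            cumulative += weight
            if r <= cumulative:
                interactions[i] += boost  # engagement reinforces the loop
                break
    # Fraction of total engagement captured by the dominant category.
    return max(interactions) / sum(interactions)

print(f"dominant-category share: {simulate_feedback_loop():.2f}")
```

Under a uniform baseline each of the five categories would hold a 0.20 share; the self-reinforcing recommendation rule concentrates engagement well above that, which is the dynamic the abstract argues keeps users inside like-minded content.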

Full Text