Journal of
Systemics, Cybernetics and Informatics
 



ISSN: 1690-4524 (Online)


Peer-reviewed journal via three different mandatory reviewing processes since 2006; from September 2020, a fourth mandatory peer-editing process has been added.

Indexed by
DOAJ (Directory of Open Access Journals), Academic Journals Database, and Google Scholar


Listed in
Cabell Directory of Publishing Opportunities and in Ulrich’s Periodical Directory


Published by
The International Institute of Informatics and Cybernetics


Re-Published in
Academia.edu
(A Community of about 40,000,000 Academics)


Honorary Editorial Advisory Board's Chair
William Lesso (1931-2015)

Editor-in-Chief
Nagib C. Callaos


Sponsored by
The International Institute of
Informatics and Systemics

www.iiis.org
 



Analogical and Logical Thinking – In the Context of Inter- or Trans-Disciplinary Communication and Real-Life Problems
Nagib Callaos, Jeremy Horne
(pages: 1-17)

Artificial Intelligence for Drone Swarms
Mohammad Ilyas
(pages: 18-22)

Brains, Minds, and Science: Digging Deeper
Maurício Vieira Kritz
(pages: 23-28)

Can AI Truly Understand Us? (The Challenge of Imitating Human Identity)
Jeremy Horne
(pages: 29-38)

Comparison of Three Methods to Generate Synthetic Datasets for Social Science
Li-jing Arthur Chang
(pages: 39-44)

Digital and Transformational Maturity: Key Factors for Effective Leadership in the Industry 4.0 Era
Pawel Poszytek
(pages: 45-48)

Does AI Represent Authentic Intelligence, or an Artificial Identity?
Jeremy Horne
(pages: 49-68)

Embracing Transdisciplinary Communication: Redefining Digital Education Through Multimodality, Postdigital Humanism and Generative AI
Rusudan Makhachashvili, Ivan Semenist
(pages: 69-76)

Engaged Immersive Learning: An Environment-Driven Framework for Higher Education Integrating Multi-Stakeholder Collaboration, Generative AI, and Practice-Based Assessment
Atsushi Yoshikawa
(pages: 77-94)

Focus On STEM at the Expense of Humanities: A Wrong Turn in Educational Systems
Kleanthis Kyriakidis
(pages: 95-101)

From Disciplinary Silos to Cyber-Transdisciplinary Networks: A Plural Epistemic Model for AGI-Era Knowledge Production
Cristo Leon, James Lipuma
(pages: 102-115)

Generative AI (Artificial Intelligence): What Is It? & What Are Its Inter- And Transdisciplinary Applications?
Richard S. Segall
(pages: 116-125)

How Does the CREL Framework Facilitate Effective Interdisciplinary Collaboration and Experiential Learning Through Role-Playing?
James Lipuma, Cristo Leon
(pages: 126-145)

Narwhals, Unicorns, and Big Tech's Messiah Complex: A Transdisciplinary Allegory for the Age of AI
Jasmin Cowin
(pages: 146-151)

Playing by Feel: Gender, Emotion, and Social Norms in Overwatch Role Choice
Cristo Leon, Angela Arroyo, James Lipuma
(pages: 152-163)

Responsible Integration of AI in Public Legal Education: Regulatory Challenges and Opportunities in Albania
Adrian Leka, Brunilda Haxhiu
(pages: 164-170)

The Civic Mission of Universities: Transdisciplinary Communication in Practice
Genejane Adarlo
(pages: 171-175)

The Promise and Peril of Artificial Intelligence in Higher Education
James Lipuma, Cristo Leon
(pages: 176-182)

They Learned the Course! Why Then Do They Come to Tutorials?
Russell Jay Hendel
(pages: 183-187)

To Use or Not to Use Artificial Intelligence (AI) to Solve Terminology Issues?
Ekaterini Nikolarea
(pages: 188-195)

Transdisciplinary Supersymmetry: Generative AI in the Vector Space of Postdigital Humanism
Rusudan Makhachashvili, Ivan Semenist
(pages: 196-204)

Why Is Trans-Disciplinarity So Difficult?
Ekaterini Nikolarea
(pages: 205-207)


 

Abstracts

 


ABSTRACT


An Investigation of the Effectiveness of Facebook and Twitter Algorithm and Policies on Misinformation and User Decision Making

Jordan Harner, Lydia Ray, Florence Wakoko-Studstill


Prominent social media sites such as Facebook and Twitter use content and filter algorithms that play a significant role in creating filter bubbles that may captivate many users. These bubbles can be defined as content that reinforces existing beliefs and exposes users to content they might not otherwise have seen. Filter bubbles are created when a social media website feeds user interactions into an algorithm that then exposes the user to more content similar to that with which they have previously interacted. Continually exposing users to like-minded content can create what is called a feedback loop: the more the user interacts with certain types of content, the more they are algorithmically bombarded with similar viewpoints. This can expose users to dangerous or extremist content, as seen with the QAnon rhetoric leading to the January 6, 2021 attack on the U.S. Capitol and the unprecedented propaganda surrounding COVID-19 vaccinations. This paper hypothesizes that the secrecy around content algorithms, and their ability to perpetuate filter bubbles, creates an environment in which dangerous false information is pervasive and not easily mitigated by the existing algorithms designed to provide false information warning messages. In our research, we focused on disinformation regarding the COVID-19 pandemic. Both Facebook and Twitter provide various forms of false information warning messages, which sometimes include fact-checked research offering a counter viewpoint to the information presented. Controversially, social media sites in most cases do not remove false information outright but instead promote these false information warning messages as a solution to extremist or false content. The results of a survey administered by the authors indicate that users would spend less time on Facebook or Twitter once they understood how their data is used to influence their behavior on the sites and the information that is fed to them via algorithmic recommendations. Further analysis revealed that only 23% of respondents who had seen a Facebook or Twitter false information warning message changed their opinion “Always” or “Frequently,” with 77% reporting that the warning messages changed their opinion only “Sometimes” or “Never,” suggesting the messages may not be effective. Similarly, users who did not conduct independent research to verify information were likely to accept false information as factual and less likely to be vaccinated against COVID-19. Conversely, our research indicates a possible correlation between having seen a false information warning message and COVID-19 vaccination status.
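
To make the feedback-loop mechanism described in the abstract concrete, the following is a minimal illustrative sketch in Python. It is not taken from the paper; the topic names, weighting scheme, and simulation loop are hypothetical. It shows how an engagement-weighted recommender that feeds prior interactions back into the next round of recommendations can let a single early click come to dominate what a user is shown.

# Illustrative sketch (hypothetical, not from the paper): a toy engagement-driven
# recommender demonstrating how reinforcing prior interactions narrows a user's feed.
import random
from collections import Counter

TOPICS = ["vaccine_skeptic", "vaccine_mainstream", "sports", "cooking"]

def recommend(interactions: Counter, n: int = 5) -> list:
    """Sample topics in proportion to past interactions (plus a small floor),
    so heavily engaged topics dominate future recommendations."""
    weights = [interactions[t] + 0.1 for t in TOPICS]
    return random.choices(TOPICS, weights=weights, k=n)

def simulate(rounds: int = 20) -> Counter:
    """Feedback loop: the user 'clicks' recommended items from one topic,
    and those clicks feed back into the next round of recommendations."""
    interactions = Counter({"vaccine_skeptic": 1})  # a single initial click
    for _ in range(rounds):
        feed = recommend(interactions)
        clicked = [item for item in feed if item == "vaccine_skeptic"]
        interactions.update(clicked)
    return interactions

if __name__ == "__main__":
    # Interaction counts concentrate on the initially clicked topic,
    # illustrating the filter-bubble feedback loop.
    print(simulate())

In this toy setup, one early interaction biases every subsequent recommendation round, which is the self-reinforcing dynamic the abstract refers to as a feedback loop.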

Full Text