Journal of
Systemics, Cybernetics and Informatics
 



ISSN: 1690-4524 (Online)


A peer-reviewed journal with three different mandatory reviewing processes since 2006; from September 2020, a fourth mandatory peer-editing process has been added.

Indexed by
DOAJ (Directory of Open Access Journals), Academic Journals Database, and Google Scholar. JSCI supplies DOAJ with metadata.


Listed in
Cabell Directory of Publishing Opportunities and in Ulrich’s Periodical Directory


Published by
The International Institute of Informatics and Cybernetics


Re-Published in
Academia.edu
(A community of about 40,000,000 academics)


Honorary Editorial Advisory Board's Chair
William Lesso (1931-2015)

Editor-in-Chief
Nagib C. Callaos


Sponsored by
The International Institute of
Informatics and Systemics

www.iiis.org
 


Utilization of Artificial Intelligence by Students in Interdisciplinary Field of Biomedical Engineering
Shigehiro Hashimoto
(pages: 1-5)

Transdisciplinary Applications of Data Visualization and Data Mining Techniques as Represented for Human Diseases
Richard S. Segall
(pages: 6-15)

Beyond Status Quo: Why is Transdisciplinary Communication Instrumental in Innovation?
James Lipuma, Cristo Leon
(pages: 16-20)

How We Can Locate Validatable Foundations of Life Themes
Jeremy Horne
(pages: 21-32)

Bringing Discipline into Transdisciplinary Communications -The ISO 56000 Family of Innovation Standards-
Rick Fernandez, William Swart
(pages: 33-39)

To AI Is Human: How AI Tools with Their Imperfections Enhance Learning
Martin Cwiakala
(pages: 40-46)

Knowledge, Learning and Transdisciplinary Communication in the Evolution of the Contemporary World
Rita Micarelli, Giorgio Pizziolo
(pages: 47-52)

Human Complexity vs. Machine Linearity: Tug-of-War Between Two Realities Coexisting in Precarious Balance
Paolo Barile, Clara Bassano, Paolo Piciocchi
(pages: 53-62)

A Cybernetic Metric Approach to Course Preparation
Russell Jay Hendel
(pages: 63-70)

The Impact of Artificial Intelligence on Education
John Jenq
(pages: 71-76)

Bridging the Gap: Harnessing the Power of Machine Learning and Big Data for Media Research
Li-jing Arthur Chang
(pages: 77-84)

Image Processing, Computer Vision, Data Visualization, and Data Mining for Transdisciplinary Visual Communication: What Are the Differences and Which Should or Could You Use?
Richard S. Segall
(pages: 85-92)

Identification – The Essence of Education
Jeremy Horne
(pages: 93-99)

The Greek-Roman Theatre in the Mediterranean Area
Maria Rosaria D’acierno Canonici Cammino
(pages: 100-108)

Examination of AI and Conventional Teaching Approaches in Cultivating Critical Thinking Skills in High School Students
Luis Castillo
(pages: 109-112)

Thoughts, Labyrinths, and Torii
Maurício Vieira Kritz
(pages: 113-119)

Can Two Human Intelligences (HIs or Noes) and Two Artificial Intelligences (AIs) Get Involved in Interlinguistic Communication? – A Transdisciplinary Quest
Ekaterini Nikolarea
(pages: 120-128)




ABSTRACT


An Investigation of the Effectiveness of Facebook and Twitter Algorithm and Policies on Misinformation and User Decision Making

Jordan Harner, Lydia Ray, Florence Wakoko-Studstill


Prominent social media sites such as Facebook and Twitter use content and filter algorithms that play a significant role in creating filter bubbles that can trap many users. These bubbles can be defined as streams of content that reinforce users' existing beliefs while shielding them from content they might otherwise have seen. Filter bubbles are created when a social media website feeds user interactions into an algorithm that then exposes the user to more content similar to that with which they have previously interacted. By continually exposing users to like-minded content, this process creates a feedback loop: the more a user interacts with certain types of content, the more they are algorithmically bombarded with similar viewpoints. This can expose users to dangerous or extremist content, as seen with the QAnon rhetoric that led to the January 6, 2021 attack on the U.S. Capitol, and with the unprecedented propaganda surrounding COVID-19 vaccinations.

This paper hypothesizes that the secrecy surrounding content algorithms and their ability to perpetuate filter bubbles create an environment in which dangerous false information is pervasive and not easily mitigated by the existing algorithms designed to display false-information warning messages. Our research focused on disinformation regarding the COVID-19 pandemic. Both Facebook and Twitter provide various forms of false-information warning messages, which sometimes include fact-checked research offering a counter viewpoint to the information presented. Controversially, in most cases social media sites do not remove false information outright, but instead promote these warning messages as a solution to extremist or false content. The results of a survey administered by the authors indicate that users would spend less time on Facebook or Twitter once they understood how their data are used to influence their behavior on the sites and to shape the information fed to them via algorithmic recommendations.

Further analysis revealed that only 23% of respondents who had seen a Facebook or Twitter false-information warning message changed their opinion "Always" or "Frequently," while 77% reported that the warning messages changed their opinion only "Sometimes" or "Never," suggesting the messages may not be effective. Similarly, users who did not conduct independent research to verify information were more likely to accept false information as factual and less likely to be vaccinated against COVID-19. Conversely, our research indicates a possible correlation between having seen a false-information warning message and COVID-19 vaccination status.
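The engagement-driven feedback loop the abstract describes can be sketched as a toy simulation, in the style of a Pólya urn. The topic names, weights, and update rule below are illustrative assumptions only, not the actual ranking algorithms of Facebook or Twitter:

```python
import random

def recommend(weights, rng):
    """Pick a topic with probability proportional to accumulated engagement."""
    topics = list(weights)
    return rng.choices(topics, weights=[weights[t] for t in topics])[0]

def simulate_feed(steps=1000, seed=0):
    """Toy feedback loop: each time a topic is shown, the user 'engages',
    and the algorithm boosts that topic's weight in future recommendations."""
    rng = random.Random(seed)
    # Hypothetical topics, all starting with equal weight (no initial bias).
    weights = {"news": 1.0, "sports": 1.0, "fringe": 1.0}
    for _ in range(steps):
        shown = recommend(weights, rng)
        weights[shown] += 1.0  # engagement feeds back into the ranking
    return weights

if __name__ == "__main__":
    final = simulate_feed()
    total = sum(final.values())
    for topic, w in sorted(final.items(), key=lambda kv: -kv[1]):
        print(f"{topic}: {w / total:.0%} of the feed")
```

Even though every topic starts with the same weight, early random engagements compound, so runs typically end with one topic claiming a disproportionate share of the feed; this self-reinforcing concentration is the filter-bubble dynamic the abstract attributes to opaque recommendation algorithms.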

Full Text