Journal of
Systemics, Cybernetics and Informatics
 



ISSN: 1690-4524 (Online)


Peer-reviewed journal via three different mandatory reviewing processes since 2006; from September 2020, a fourth mandatory peer-editing process has been added.

Indexed by
DOAJ (Directory of Open Access Journals), Academic Journals Database, and Google Scholar


Listed in
Cabell Directory of Publishing Opportunities and in Ulrich’s Periodical Directory


Published by
The International Institute of Informatics and Cybernetics


Re-Published in
Academia.edu
(A community of about 40,000,000 academics)


Honorary Editorial Advisory Board's Chair
William Lesso (1931-2015)

Editor-in-Chief
Nagib C. Callaos


Sponsored by
The International Institute of
Informatics and Systemics

www.iiis.org
 

Editorial Advisory Board

Quality Assurance

Editors

Journal's Reviewers
Call for Special Articles
 

Description and Aims

Submission of Articles

Areas and Subareas

Information to Contributors

Editorial Peer Review Methodology

Integrating Reviewing Processes


Transfer Learning for Facial Emotion Recognition on Small Datasets
Paolo Barile, Clara Bassano, Paolo Piciocchi
(pages: 1-5)

How to Link Educational Purposes and Immersive Video Games Development? An Ontological Approach Proposal
Nathan Aky
(pages: 6-13)

Application of Building Information Modeling (BIM) in the Planning and Construction of a Building
Renata Maria Abrantes Baracho, Luiz Gustavo da Silva Santiago, Antonio Tagore Assumpção Mendoza e Silva, Marcelo Franco Porto
(pages: 14-19)

Transformative, Transdisciplinary, Transcendent Digital Education: Synergy, Sustainability and Calamity
Rusudan Makhachashvili, Ivan Semenist
(pages: 20-27)

New Online Tools for the Data Visualization of Bivalve Molluscs' Production Areas of Veneto Region
Eleonora Franzago, Claudia Casarotto, Matteo Trolese, Marica Toson, Mirko Ruzza, Manuela Dalla Pozza, Grazia Manca, Giuseppe Arcangeli, Nicola Ferrè, Laura Bille
(pages: 28-32)

Geodata Processing Methodology on GIS Platforms When Creating Spatial Development Plans of Territorial Communities: Case of Ukraine
Olena Kopishynska, Yurii Utkin, Ihor Sliusar, Leonid Flehantov, Mykola Somych, Oksana Yakovlieva, Olena Scryl
(pages: 33-40)

D-CIDE: An Interactive Code Learning Program
Lukas Grant, Matthew F. Tennyson, Jason Owen
(pages: 41-46)

Interdisciplinary Digital Skills Development for Educational Communication: Emergency and AI-Enhanced Digitization
Rusudan Makhachashvili, Ivan Semenist, Ganna Prihodko, Irina Kolegaeva, Olexandra Prykhodchenko, Olena Tupakhina
(pages: 47-51)

Interdisciplinarity in Smart Systems Applied to Rural School Transport in Brazil
Renata Maria Abrantes Baracho, Mozart Joaquim Magalhães Vidigal, Marcelo Franco Porto, Beatriz Couto
(pages: 52-59)

Peculiarities of the Realization of IT Projects for the Implementation of ERP Systems on the Path of Digitalization of Territorial Communities Activities
Olena Kopishynska, Yurii Utkin, Ihor Sliusar, Khanlar Makhmudov, Olena Kalashnyk, Svitlana Moroz, Olena Kyrychenko
(pages: 60-67)


 

Abstracts

 


ABSTRACT


An Investigation of the Effectiveness of Facebook and Twitter Algorithm and Policies on Misinformation and User Decision Making

Jordan Harner, Lydia Ray, Florence Wakoko-Studstill


Prominent social media sites such as Facebook and Twitter use content and filter algorithms that play a significant role in creating filter bubbles that can trap many users. These bubbles can be defined as content that reinforces existing beliefs and exposes users to content they might otherwise not have seen. Filter bubbles are created when a social media site feeds user interactions into an algorithm that then exposes the user to more content similar to that with which they have previously interacted. By continually exposing users to like-minded content, this creates a feedback loop: the more the user interacts with certain types of content, the more they are algorithmically bombarded with similar viewpoints. This can expose users to dangerous or extremist content, as seen with the QAnon rhetoric leading to the January 6, 2021 attack on the U.S. Capitol and the unprecedented propaganda surrounding COVID-19 vaccinations. This paper hypothesizes that the secrecy around content algorithms and their ability to perpetuate filter bubbles create an environment where dangerous false information is pervasive and not easily mitigated by the existing algorithms designed to provide false information warning messages. In our research, we focused on disinformation regarding the COVID-19 pandemic. Both Facebook and Twitter provide various forms of false information warning messages, which sometimes include fact-checked research to provide a counter-viewpoint to the information presented. Controversially, social media sites in most cases do not remove false information outright but instead promote these false information warning messages as a solution to extremist or false content. The results of a survey administered by the authors indicate that users would spend less time on Facebook or Twitter once they understood how their data are used to influence their behavior on the sites and the information fed to them via algorithmic recommendations.
Further analysis revealed that only 23% of respondents who had seen a Facebook or Twitter false information warning message changed their opinion "Always" or "Frequently," with 77% reporting that the warning messages changed their opinion only "Sometimes" or "Never," suggesting the messages may not be effective. Similarly, users who did not conduct independent research to verify information were likely to accept false information as factual and were less likely to be vaccinated against COVID-19. Conversely, our research indicates a possible correlation between having seen a false information warning message and COVID-19 vaccination status.
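The recommendation feedback loop the abstract describes can be illustrated with a toy simulation (everything here is a hypothetical sketch, not the authors' model or any platform's actual algorithm): a recommender infers a user's preference from past interactions and samples new items from an ever-narrower band around that preference, so the range of viewpoints the user sees shrinks over time.

```python
import random

def simulate_filter_bubble(steps=200, seed=0):
    """Toy feedback loop: the recommender infers a preference from past
    interactions and draws new items from an ever-narrower band around it.
    Viewpoints are scored on [-1, 1]; all parameters are illustrative."""
    rng = random.Random(seed)
    history = [rng.uniform(-1, 1)]   # viewpoint scores of items the user engaged with
    widths = []                      # breadth of recommendations at each step
    for _ in range(steps):
        center = sum(history) / len(history)           # inferred user preference
        width = max(0.05, 1.0 - 0.01 * len(history))   # band tightens as data accrues
        item = max(-1.0, min(1.0, rng.gauss(center, width)))
        history.append(item)                           # engagement feeds back in
        widths.append(width)
    return history, widths

history, widths = simulate_filter_bubble()
# Late recommendations cluster far more tightly around the inferred
# preference than early ones: the user's exposure has narrowed.
```

The point of the sketch is the feedback structure, not the numbers: each recommended item re-enters the history that the next recommendation is conditioned on, which is exactly the loop the abstract argues makes false information hard to dislodge with warning messages alone.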

Full Text