TranSalin is an Android mobile application that translates text from photos and displays the results as an overlay. Built on Google’s ML Kit APIs, the application works offline, removing the need for an internet connection or a paid subscription to access translation services. It provides additional features such as toggling overlay visibility, romanizing Chinese script, copying text, reading the text aloud, and saving the image. TranSalin passed usability testing with an above-average System Usability Scale (SUS) score of 83.92. The application needs further improvement in performance speed, recognition accuracy, and translation quality.
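As a rough illustration of the recognize-translate-overlay flow this abstract describes, the sketch below substitutes pytesseract and Pillow for ML Kit’s on-device Android APIs, with a toy translate() function standing in for the offline translation model; none of these stand-ins are part of TranSalin itself.

```python
# Illustrative sketch of an OCR -> translate -> overlay pipeline.
# NOTE: TranSalin uses Google's on-device ML Kit APIs on Android; pytesseract,
# Pillow, and the toy translate() below are stand-ins for illustration only.
import pytesseract
from PIL import Image, ImageDraw

def translate(text: str) -> str:
    """Placeholder for an offline translation step (hypothetical)."""
    glossary = {"hello": "kamusta", "world": "mundo"}  # toy dictionary
    return " ".join(glossary.get(w.lower(), w) for w in text.split())

def overlay_translations(path: str, out_path: str) -> None:
    img = Image.open(path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Per-word boxes from Tesseract; ML Kit would return text blocks instead.
    data = pytesseract.image_to_data(img, output_type=pytesseract.Output.DICT)
    for i, word in enumerate(data["text"]):
        if not word.strip():
            continue
        x, y = data["left"][i], data["top"][i]
        w, h = data["width"][i], data["height"][i]
        draw.rectangle([x, y, x + w, y + h], fill="white")  # mask source text
        draw.text((x, y), translate(word), fill="black")    # draw translation
    img.save(out_path)

overlay_translations("photo.jpg", "photo_translated.jpg")
```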
This paper presents AutoCard, a program that aims to lessen the load of sentence mining when learning a language through immersion, where sentence mining is the process of taking words or grammar structures from immersion material and learning them through a spaced-repetition system. Given the target words to learn and the websites to search, the program returns sets of sentences containing those words using Scrapy, a Python web scraping framework, and Twitter’s API for scraping tweets from native speakers. The sentences selected from each set are then added to a deck for later review using the program’s study function, which simulates flashcard learning. A System Usability Scale (SUS) survey was conducted with 15 respondents, yielding a score of 70. As this exceeds the 50.9 threshold, the program rates as Good in terms of usability.
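To make the sentence-collecting step concrete, here is a minimal Scrapy spider in the spirit of the abstract: it fetches pages and yields sentences that contain any target word. The spider name, start URL, and TARGET_WORDS set are hypothetical placeholders, not AutoCard’s actual configuration.

```python
# Minimal Scrapy spider sketching the sentence-mining step: fetch pages and
# keep sentences that contain any target word. The spider name, start URL,
# and TARGET_WORDS are hypothetical stand-ins, not AutoCard's configuration.
import re
import scrapy

TARGET_WORDS = {"perro", "gato"}  # example target vocabulary

class SentenceSpider(scrapy.Spider):
    name = "sentences"
    start_urls = ["https://example.com/articles"]  # placeholder source site

    def parse(self, response):
        # Join visible paragraph text, then split it into rough sentences.
        text = " ".join(response.css("p::text").getall())
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            words = set(re.findall(r"\w+", sentence.lower()))
            hits = TARGET_WORDS & words
            if hits:
                # Each item becomes a candidate card for the SRS deck.
                yield {"sentence": sentence.strip(), "targets": sorted(hits)}
```

Such a spider could be run standalone with `scrapy runspider sentence_spider.py -o sentences.json`, producing a JSON file of candidate sentences for review.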
In today’s digital era, the photo printing industry has been declining. To make simple photos more engaging, we apply blind watermarking to incorporate additional media, such as video or audio, into images, which can then be scanned through this application. The blind watermarking technique converts the RGB image into its YCbCr components, then applies the Discrete Wavelet Transform (DWT) to embed a QR code generated from the uploaded media. Through exploratory research and purposive sampling, respondents were tasked to test the app and answer the System Usability Scale questionnaire, which resulted in an average score. The application received positive reviews despite the issues users noticed in the image output and the scanning process.
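The described embedding pipeline lends itself to a short sketch: convert to YCbCr, take a one-level 2-D DWT of the luma channel, and add a QR matrix into a detail subband. The choice of the Haar wavelet, the HH subband, and the embedding strength ALPHA below are assumptions; the abstract does not specify them.

```python
# Sketch of the described embedding: RGB -> YCbCr, single-level 2-D DWT on
# the luma (Y) channel, then a QR code written into the HH subband. The
# strength ALPHA, the Haar wavelet, and the HH subband are assumptions; the
# abstract does not give the exact subband or scaling.
import numpy as np
import pywt
import qrcode
from PIL import Image

ALPHA = 8.0  # embedding strength (assumed)

def embed_qr(image_path: str, payload: str, out_path: str) -> None:
    ycbcr = Image.open(image_path).convert("YCbCr")
    y, cb, cr = [np.asarray(c, dtype=np.float64) for c in ycbcr.split()]

    # One-level Haar DWT of the luma channel.
    ll, (lh, hl, hh) = pywt.dwt2(y, "haar")

    # Render the payload (e.g., a media URL) as a binary QR matrix.
    qr = qrcode.QRCode(border=0)
    qr.add_data(payload)
    qr.make(fit=True)
    bits = np.array(qr.get_matrix(), dtype=np.float64)  # 0/1 matrix

    h, w = bits.shape
    assert h <= hh.shape[0] and w <= hh.shape[1], "image too small for QR"
    hh[:h, :w] += ALPHA * (2 * bits - 1)  # additive +/-ALPHA embedding

    y_marked = pywt.idwt2((ll, (lh, hl, hh)), "haar")[: y.shape[0], : y.shape[1]]
    channels = [Image.fromarray(np.uint8(np.clip(c, 0, 255)))
                for c in (y_marked, cb, cr)]
    Image.merge("YCbCr", channels).convert("RGB").save(out_path)

embed_qr("photo.png", "https://example.com/media/clip.mp4", "marked.png")
```

Extraction would reverse the same steps on the scanned image: take the DWT of the luma channel and read the sign pattern back out of the HH subband to recover the QR code.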
The study addresses the challenge of finding compatible roommates among college students around the University of the Philippines Los Baños, presenting a solution in the form of "BunkUP," a Flutter and Firebase-based mobile application for roommate-finding. Leveraging features adapted from existing matchmaking applications, the application aims to streamline the process of locating an ideal roommate based on individual preferences and lifestyle. The methodology uses content-based filtering to generate roommate suggestions. Evaluating the application with University of the Philippines Los Baños students using the System Usability Scale yielded a high score of 92.05, indicating strong usability and user-friendliness. Feedback further confirmed the application’s effectiveness in simplifying roommate selection. The successful integration of Flutter and Firebase contributed to enhanced functionality and user experience, emphasizing the potential of this approach in addressing the roommate-finding challenges faced by college students.
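Although BunkUP itself is built with Flutter and Firebase, the content-based filtering idea can be sketched independently of that stack: encode each student’s preferences as a numeric vector and rank candidates by cosine similarity. The feature names and profiles below are hypothetical toy data, not BunkUP’s actual schema.

```python
# Minimal sketch of content-based roommate filtering: encode each student's
# preferences as a numeric vector and rank candidates by cosine similarity.
# The feature names and profiles are hypothetical placeholders.
import numpy as np

# Example features: [sleeps_late, tidiness, sociability, study_at_home]
profiles = {
    "ana":   np.array([1.0, 4.0, 2.0, 5.0]),
    "ben":   np.array([5.0, 2.0, 4.0, 1.0]),
    "carla": np.array([1.0, 5.0, 1.0, 4.0]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def suggest(user: str, top_k: int = 2) -> list[tuple[str, float]]:
    me = profiles[user]
    scores = [(other, cosine(me, vec))
              for other, vec in profiles.items() if other != user]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_k]

print(suggest("ana"))  # carla ranks above ben on this toy data
```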
Closed-circuit television (CCTV) cameras are widely used for security purposes to ensure public safety. Quickly assessing an interaction between two people as either hostile or non-hostile could help authorities intervene before a life-and-death situation develops. This paper discusses the use of pose estimation to classify actions as hostile or non-hostile, with hostile actions being choking, hold-up, hostage, punching, and kicking, and non-hostile actions being cheek-to-cheek, dancing, handshaking, hugging, and talking. The training dataset, taken from recorded videos, contains 5,000 images, with 500 images per class. YOLOv4 was used as the machine learning model. The model’s overall accuracy in classifying interactions as either Hostile or Non-Hostile is 71.83%, while its overall accuracy in detecting each specific action correctly and classifying it as Hostile or Non-Hostile is 43.58%. It can be concluded that the model has difficulty differentiating individual actions but can generally identify whether an action is Hostile or Non-Hostile.
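The gap between the two reported accuracies follows from how the ten action classes collapse into two: a punch mispredicted as a kick is wrong at the action level but still correct at the Hostile/Non-Hostile level. The sketch below makes that collapse explicit; its sample detections are fabricated placeholders for illustration only.

```python
# Sketch of the two evaluation views reported above: exact-action accuracy
# versus binary Hostile/Non-Hostile accuracy after collapsing the ten classes.
# The sample predictions are fabricated placeholders for illustration only.
HOSTILE = {"choking", "hold-up", "hostage", "punching", "kicking"}

def is_hostile(action: str) -> bool:
    return action in HOSTILE

def accuracies(pairs: list[tuple[str, str]]) -> tuple[float, float]:
    """pairs = (true_action, predicted_action) for each detection."""
    exact = sum(t == p for t, p in pairs) / len(pairs)
    binary = sum(is_hostile(t) == is_hostile(p) for t, p in pairs) / len(pairs)
    return exact, binary

# Toy detections: a punch mistaken for a kick still counts as Hostile.
sample = [("punching", "kicking"), ("hugging", "hugging"),
          ("choking", "choking"), ("dancing", "handshaking")]
print(accuracies(sample))  # exact 0.5, binary 1.0 on this toy sample
```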