In the last exercise, we used an application developed by Martin Hawksey to collect tweets and their metadata using the Twitter Search API and Google’s API. Before carrying on, I would like to raise an issue from exercise 4, which was about creating an application to obtain access to Twitter’s API. The second step of creating the application was to fill in an online form, which included several identifying fields and ended with the developer agreement. The main point of this agreement is that, since we will have access to Twitter’s API, we have to comply with the rules and conditions it sets out. Any misconduct or abuse of this agreement may result in the user account being suspended or blocked.
Here is a part of the agreement:
You will not attempt to exceed or circumvent limitations on access, calls and use of the Twitter API (“Rate Limits”), or otherwise use the Twitter API in a manner that exceeds reasonable request volume, constitutes excessive or abusive usage, or otherwise fails to comply or is inconsistent with any part of this Agreement. If you exceed or Twitter reasonably believes that you have attempted to circumvent Rate Limits, controls to limit use of the Twitter APIs or the terms and conditions of this Agreement, then your ability to use the Licensed Materials may be temporarily suspended or permanently blocked. Twitter may monitor your use of the Twitter API to improve the Twitter Service and to ensure your compliance with this Agreement.
One issue I have with this agreement is why there is a limitation on access to information that is already in the public domain. Meanwhile, Twitter is entitled to monitor my account, and it might sign agreements with organisations to use our information for any purpose. To be honest, I don’t mind my Twitter account being accessed by anyone, because social media exists for networking with others; so why does Twitter impose restrictions on the information it releases?
Ernesto Priego said in his post Publicly available data from Twitter is public evidence and does not necessarily constitute an “ethical dilemma”:
“There is a wealth of information in a tweet’s metadata that can be beneficial for research in fields other than the Life Sciences. The act of archiving and disseminating public information publicly does not have to be cause for an “ethical dilemma”, as long as the archived and disseminated information was public in the first instance”.
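Coming back to the rate limits quoted from the agreement: in practice, Twitter’s API reports your remaining quota in the response headers, so an application can back off instead of circumventing the limits. Below is a minimal sketch in Python, assuming a bearer token and the classic v1.1 search endpoint; the token value is a placeholder, and the exact header names should be checked against Twitter’s current documentation.

```python
import json
import time
import urllib.parse
import urllib.request

BEARER_TOKEN = "YOUR-BEARER-TOKEN"  # placeholder; obtained as in exercise 4
SEARCH_URL = "https://api.twitter.com/1.1/search/tweets.json"


def seconds_until_reset(headers, now):
    """How long to wait, based on Twitter's rate-limit response headers.

    Twitter reports the remaining request quota and the reset time
    (a Unix timestamp) alongside each response.
    """
    remaining = int(headers.get("x-rate-limit-remaining", "1"))
    reset_at = int(headers.get("x-rate-limit-reset", str(now)))
    if remaining > 0:
        return 0                       # quota left: no need to wait
    return max(0, reset_at - now)      # otherwise wait for the window to reset


def search(hashtag):
    """One authenticated search call that respects the rate limit."""
    url = SEARCH_URL + "?" + urllib.parse.urlencode({"q": hashtag, "count": 100})
    req = urllib.request.Request(url, headers={"Authorization": "Bearer " + BEARER_TOKEN})
    with urllib.request.urlopen(req) as resp:
        # normalise header names to lowercase before looking them up
        headers = {k.lower(): v for k, v in resp.headers.items()}
        body = json.load(resp)
    wait = seconds_until_reset(headers, int(time.time()))
    if wait:
        time.sleep(wait)               # back off instead of hammering the API
    return body
```

The point is simply that staying within the limits is a small amount of code, not a real obstacle for a well-behaved application.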
Martin Hawksey has reported in one of his posts on developments in Twitter Archiving Google Spreadsheet (TAGS), and he said the biggest change for TAGS is that all requests now need authenticated access. Back to exercise 5: I went through the instructions and everything was fine. In step 27 I searched for the hashtag #citylis, and then I completed steps 28, 29 and 30 successfully. When I had obtained the archive of this hashtag for the last 7 days and moved on to steps 31 and 32, the screen froze as soon as I clicked on Add summary sheet. I repeated this exercise three times, and each time I faced the same problem. However, from step 30 I obtained a summary of the archive for this hashtag, and most of the tweets were posted between 9 and 11 in the morning and between 16:00 and 21:00 in the evening. This gives me valuable information about the best times for my friends to participate in this hashtag. The question now is what other information we can obtain from Twitter’s API, and to what extent we can access that information.
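That time-of-day summary can also be reproduced straight from the archive itself, which is useful when the summary sheet freezes as it did for me. Here is a minimal sketch in Python, assuming the archive has been exported as CSV with a created_at column in Twitter’s classic timestamp format; the column name and the sample rows below are illustrative assumptions, not real data from my archive.

```python
import csv
import io
from collections import Counter
from datetime import datetime

# Illustrative sample in the shape of a TAGS export; the "created_at"
# column name and these rows are assumptions, not real archive data.
SAMPLE_CSV = """created_at,text
Mon Nov 10 09:15:02 +0000 2014,Morning tweet #citylis
Mon Nov 10 10:40:11 +0000 2014,Another morning tweet #citylis
Mon Nov 10 17:05:45 +0000 2014,Evening tweet #citylis
"""


def tweets_per_hour(csv_file):
    """Count tweets by hour of day from a TAGS-style CSV archive."""
    counts = Counter()
    for row in csv.DictReader(csv_file):
        # Twitter's classic timestamp, e.g. "Mon Nov 10 09:15:02 +0000 2014"
        when = datetime.strptime(row["created_at"], "%a %b %d %H:%M:%S %z %Y")
        counts[when.hour] += 1
    return counts


counts = tweets_per_hour(io.StringIO(SAMPLE_CSV))
print(counts)  # hour of day -> number of tweets
```

For a real archive, you would open the exported file instead of the inline sample and look for the busiest hours in the resulting counts.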
Over the past years, substantial effort has gone into developing web services and the languages used to structure websites and describe data. Developments in these areas have made accessing and presenting information much easier and more sophisticated. Web 1.0 was read-only: it did not allow developers to access the systems of service providers. Web 2.0, on the other hand, is the upgraded version, in which developers have more opportunities to access and combine services from different providers. Web 2.0 tools are seen to have tremendous potential for both individuals and businesses. They help users stay in touch with the whole world easily and share information within seconds. They also help businesses by providing easy access to websites and faster transactions.
The best examples of Web 2.0 are social networks such as Facebook, LinkedIn and Twitter, and their applications on mobile phones. The development of web services is the driving force behind the revolution in social software, creating new forms of social networks and communities. The explosive growth of social media usage, and the movement of a variety of services towards social media, suggest that society has become “comfortable” engaging in these activities. These phenomena have created a need for new software mechanisms, which is reflected in the exponential growth of the market for social applications. Accordingly, it is wise and fair to exploit the available social media tools to create bespoke platforms customised to satisfy specific needs and objectives.
The question now is: are we ready for Web 3.0, where data formats, protocols and software platforms are open for developers not just to read or write, but also to create new tools? A better understanding of Web 2.0 will help us understand the technology of Web 3.0.
The revolution in the Internet and computer science has led to substantial improvements in the storage of information and its accessibility. Information retrieval has made libraries and search engines work easily for everyone, individuals and businesses alike. In my opinion, the two main advantages of information retrieval are, first, providing users with the relevant information they need from different resources, and second, supporting them in making decisions on relevant issues by reducing information overload.
However, the issue I would raise here is the ability of these websites and search engines to continue to function well given their limited size and capacity. How can they cope with the increase in data volume, the diversity of ways information is dispersed across the globe, and the huge amount of information, visual and textual, from daily news and the advancement of science and technology?
How to provide scalable performance for rapidly increasing data and workloads is critical in the design of the next generation of information retrieval systems.
Scalability is one of the problems facing computer applications and Internet websites, limiting their ability to keep providing users with good service. Therefore, new methods and technologies from computing, information retrieval and networking must be merged to address these scalability concerns. Let me give you a practical example. I mentioned in the previous post my friend’s project, which was to create a scientific network for Saudi students from different disciplines, particularly those studying in the United Kingdom, to discuss important issues. In addition, two search engines, for jobs and for courses, were created in order to attract students to join the network. However, one of the main problems of the project was scalability, and as the developer said, the solution might be either using lots of hardware resources, which costs money, or improving the design of the information retrieval systems to cope with the rapid increase in data and information.
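The trade-off the developer described, buying more hardware versus designing the system better, often comes down to how the data is split across machines. One common design technique is consistent hashing: keys are placed on a ring, and adding a server only moves the keys in that server’s new segment instead of reshuffling everything. A minimal sketch (the server and key names are invented for illustration):

```python
import bisect
import hashlib


def _hash(key):
    """Stable 32-bit hash of a string key."""
    return int(hashlib.md5(key.encode()).hexdigest()[:8], 16)


class ConsistentHashRing:
    """Minimal consistent-hash ring: a key maps to the next server clockwise."""

    def __init__(self, servers):
        self._ring = sorted((_hash(s), s) for s in servers)

    def add(self, server):
        bisect.insort(self._ring, (_hash(server), server))

    def server_for(self, key):
        hashes = [h for h, _ in self._ring]
        i = bisect.bisect(hashes, _hash(key)) % len(self._ring)
        return self._ring[i][1]


ring = ConsistentHashRing(["server-a", "server-b", "server-c"])
keys = ("jobs", "courses", "profiles", "messages")
before = {k: ring.server_for(k) for k in keys}
ring.add("server-d")                      # scale out by one machine
after = {k: ring.server_for(k) for k in keys}
moved = [k for k in keys if before[k] != after[k]]
print(moved)  # only keys in server-d's new segment change machines
```

With naive modulo hashing, adding one machine would remap almost every key; here, growth disturbs only a fraction of the data, which is exactly the kind of design improvement that can substitute for raw hardware.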
I asked myself this question when a friend of mine asked me to help him with a new project. The project was to design a platform for a new social network for Saudi students. This social network aims to bring together students from different disciplines to discuss some common problems facing Saudi society. The main challenges of the project were finding a sponsor and convincing students from different academic backgrounds to be active in this network. Therefore, a proper team needed to be created, and it had to include a computer specialist or programmer and an information architect.
The first logical questions in the first meeting were whether we needed to build a new platform for this website or use a ready-made one, and, if we built our own, what the best language for social networks would be. The majority of the team agreed to design this social network from scratch. However, during the meeting, many questions jumped into my head, such as: what are the main similarities and differences between a computer specialist or programmer and an information architect, and how can we as librarians contribute effectively to this project? I must say I am lucky to be studying this course, as it will provide me with the answers to my questions.
When I look at the evolution of computers and the Internet over time, and their influence on the design of websites and on the way information is structured and displayed on them, I realise the importance of knowledge of information architecture in this modern time. Nowadays, universities, companies, public libraries and other organisations from different specialities compete with each other to design their websites in unique ways in order to display their information effectively.
Here is my new blog; everyone is welcome: jalshehridita14.wordpress.com #DITA #citylis