On 3 November 1993, ESIS announced its homepage on the World Wide Web (WWW) to the user community. Since then, ESIS has steadily expanded its Web support to the astronomical community to include a bibliographic service, the ESIS catalogue documentation, and the ESIS Data Browser. More functionality will be added in the near future. All these services share a common ESIS structure that is also used by the other ESIS user paradigms, such as the ESIS Graphical User Interface (Giommi and Ansari, 1993) and the ESIS Command Line Interface.
Following a forms-based paradigm, each ESIS-Web application interfaces to the hypertext transfer protocol (HTTP), translating queries to and from the hypertext markup language (HTML) format understood by the NCSA Mosaic interface. In this paper, we discuss the ESIS system and show how each ESIS service works on the World Wide Web client.
The Space Telescope Science Institute (STScI) makes available a wide variety of information concerning the Hubble Space Telescope (HST) via the Space Telescope Electronic Information Service (STEIS). STEIS is accessible via anonymous ftp, gopher, WAIS, and WWW. The information on STEIS includes how to propose for time on the HST, the current status of HST, reports on the scientific instruments, the observing schedule, data reduction software, calibration files, and a set of publicly available images in JPEG, GIF, and TIFF formats. STEIS serves both the astronomical community and the larger Internet community. WWW is currently the most widely used interface to STEIS. Future developments on STEIS are expected to include larger amounts of hypertext, especially HST images and educational material of interest to students, educators, and the general public, as well as the ability to query proposal status.
Since the introduction of the World Wide Web and its associated multimedia technologies in 1993, numerous projects have been undertaken to introduce this new tool into introductory physics teaching. This paper describes two such undertakings: the Cockpit Physics project at the United States Air Force Academy and the WebPhysics project at Indiana University Purdue University at Indianapolis.
TechTools™ is a professional development program for science and mathematics teachers aimed at promoting a constructivist pedagogy with modern technologies: probeware, image processing, multimedia, e-mail, and the WWW. We report preliminary results on (1) changes in teachers' use of technology tools, classroom pedagogy, and attitudes, and (2) the systemic factors that catalyze or inhibit the technology reform needed in the educational system.
The students in an introductory physics class for non-science majors are not likely to have sophisticated computer skills. However, many of them have become Web surfers and can be enticed to use the Web as part of their study of physics. Our initial efforts have focused on using the Web to improve courses for elementary and secondary education majors. Materials available to the students include both academic and administrative activities. Students may have access to items such as the course syllabus and up-to-date information on their grades. On the academic side, practice tests, textbook pages, and some interactive programs are available. In developing these materials we have incorporated a variety of techniques including HTML files, C programs, the Macromedia Director Shockwave plug-in, and the Adobe Acrobat plug-in. In all of the efforts we have attempted to minimize the amount of time spent on software development.
On-site wastewater treatment facilities (WWTFs) collect, treat, and dispose of wastewater from dwellings that are not connected to municipal wastewater collection and treatment systems. They serve about 25% of the total population in the United States through an estimated 26 million homes, businesses, and recreational facilities nationwide. There is currently no adequate, coordinated information management system for on-site WWTFs. Given the increasing concern about environmental contamination and its effect on public health, a more adequate tool for managing on-site WWTF information is needed. This paper presents the development of an integrated, GIS-based, on-site wastewater information management system, which includes three components: (1) a mobile GIS for field data collection; (2) a World Wide Web (WWW) interface for electronic submission of individual WWTF information to a centralized GIS database in a state department of public health or state environmental protection agency; and (3) a GIS for the display and management of on-site WWTF information, along with other spatial information such as land use, soil types, streams, and topography. It is anticipated that this GIS-based on-site wastewater information management system will provide environmental protection agencies and public health organizations with a spatial framework for managing on-site WWTFs and assessing the risks related to surface discharges.
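As a rough illustration of component (2) only, the sketch below shows how a WWW interface might accept an electronically submitted WWTF record and store it in a centralized database. The endpoint path, field names, and table schema are assumptions made for the example, not details taken from the paper.

```python
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical central store for submitted on-site WWTF records.
DB = sqlite3.connect("wwtf.db", check_same_thread=False)
DB.execute("""CREATE TABLE IF NOT EXISTS wwtf (
    facility_id TEXT, latitude REAL, longitude REAL,
    system_type TEXT, last_inspection TEXT)""")

class SubmitHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Accept a JSON record posted to /submit and insert it into the database.
        if self.path != "/submit":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        record = json.loads(self.rfile.read(length))
        DB.execute(
            "INSERT INTO wwtf VALUES (?, ?, ?, ?, ?)",
            (record["facility_id"], record["latitude"], record["longitude"],
             record["system_type"], record["last_inspection"]),
        )
        DB.commit()
        self.send_response(201)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), SubmitHandler).serve_forever()
```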
The purpose of software redocumentation is to recover comprehension of software and to record it for future use. This paper describes Partitioned Annotations of Software (PAS), in which comprehension is recorded in hypertext and browsed with web browsers. The annotations for each code component are partitioned in order to keep different explanations separate, leverage the advantages of hypertext, and better support the processes of program comprehension. The paper describes a tool that parses code and generates PAS skeletons. The paper also describes a process of incremental redocumentation in which comprehension of software is recorded incrementally during normal maintenance. Experience with PAS in an industrial project is summarized.
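The abstract does not describe the PAS tool's internals; as a hedged sketch of the general idea, the following parses a source file and emits one hypertext skeleton per function, with separate (partitioned) annotation sections to be filled in during maintenance. The section names are illustrative assumptions, not the paper's partitioning scheme.

```python
import ast

# Illustrative partitions; the actual PAS partitions are not given in the abstract.
SECTIONS = ["Purpose", "Algorithm", "Interface", "Maintenance notes"]

def generate_skeletons(source: str) -> str:
    """Emit an HTML skeleton with empty annotation partitions per function."""
    tree = ast.parse(source)
    parts = ["<html><body>"]
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            parts.append(f'<h2 id="{node.name}">{node.name}</h2>')
            for section in SECTIONS:
                parts.append(f"<h3>{section}</h3>\n<p><!-- to be filled in "
                             f"incrementally during maintenance --></p>")
    parts.append("</body></html>")
    return "\n".join(parts)

example = "def pay(amount):\n    return amount * 1.2\n"
print(generate_skeletons(example))
```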
The World Wide Web has been widely accepted as a viable communication infrastructure to support collaborative activities on computer networks. While cooperating objects of different roles can easily and freely communicate knowledge on the web, the web site managers/developers must write programs to manage the communication behavior in collaborative activities. However, the current hypertext model for the web concentrates on the static structure of hypertext. Few conceptual specifications are capable of effectively integrating the hypertext model with activity dynamics to clarify the dynamic interaction and constraints of desired collaborative activities on the web. Furthermore, decision-makers must observe communication behavior on the web to adapt collaborative activities. Although web servers register each web access in a web log, up to now, only a few query or report mechanisms have been available to obtain required information from the web log. This study presents a specification to capture the static and dynamic structure of intended collaborative activities, and a query mechanism to obtain required information from the web log. The specification and query mechanism make it possible to construct a web site that will provide group activity space and flexibly interpret roles, encourage individuals to commit to responsibilities, and enable activities to be observed.
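The abstract does not give the form of the proposed query mechanism; as a minimal sketch of the underlying idea, the following assumes web accesses are recorded in Common Log Format and answers a simple question about who accessed which pages, the kind of information an observer of collaborative activities might request.

```python
import re
from collections import Counter

# Common Log Format entry, e.g.:
# 10.0.0.1 - alice [10/Oct/2000:13:55:36 -0700] "GET /review/doc1.html HTTP/1.0" 200 2326
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) \S+'
)

def parse_log(lines):
    """Yield (user, path, status) tuples from raw log lines."""
    for line in lines:
        m = LOG_PATTERN.match(line)
        if m:
            yield m.group("user"), m.group("path"), int(m.group("status"))

def accesses_by_user(lines, path_prefix="/"):
    """Count successful accesses per user under a given path prefix."""
    counts = Counter()
    for user, path, status in parse_log(lines):
        if status == 200 and path.startswith(path_prefix):
            counts[user] += 1
    return counts

sample = [
    '10.0.0.1 - alice [10/Oct/2000:13:55:36 -0700] "GET /review/doc1.html HTTP/1.0" 200 2326',
    '10.0.0.2 - bob [10/Oct/2000:13:56:12 -0700] "GET /review/doc2.html HTTP/1.0" 200 1045',
]
print(accesses_by_user(sample, "/review/"))   # Counter({'alice': 1, 'bob': 1})
```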
Healthcare information contained on the World Wide Web is not screened or regulated and claims may be unsubstantiated and misleading. The objective of this study was to evaluate the nature and quality of information on the Web in relation to hand surgery. Three search engines were assessed for information on three hand operations: carpal tunnel decompression, Dupuytren's release and trigger finger release. Websites were classified and evaluated for completeness, accuracy, accountability and reference to a reliable source of information. A total of 172 websites were examined. Although 85% contained accurate information, in 65% this information was incomplete. Eighty-seven per cent of websites were accountable for the information presented, but only 24% made references to reliable sources. Until an organised approach to website control is established, it is important for hand surgeons to emphasise to their patients that not everything they read is complete or accurate. Publicising sites known to be of high quality will promote safe browsing of the Web.
In this paper, we present a new ranking algorithm and an intelligent Web search system using data mining techniques to search and analyze Web documents in a more flexible and effective way. Our method takes advantage of the characteristics of Web documents to extract, find, and rank data in a more meaningful manner. We utilize hyperlink structures with Web document content to intelligently rank the retrieved results. It can solve ranking problems of existing algorithms for multi-frame Web documents and unrelated linked documents. In addition, we use domain specific ontologies to improve our query process and to rank retrieved Web documents with better semantic notion. Furthermore, we use association rule mining to find the patterns of maximal keyword sets, which represent the main characteristics of the retrieved documents. For subsequent queries, these keywords become recommended sets of query terms for users' specific needs. Clustering is used to group retrieved documents into distinct sets that can help users make their decisions easier and faster. Experimental results show that our Web search system is indeed effective and efficient.
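As an illustrative sketch (not the paper's algorithm), the following blends a hyperlink-based score with a content-based score when ranking retrieved documents; the weighting, the scoring functions, and the sample documents are assumptions made for the example.

```python
import math

def link_score(doc_id, links):
    """links is a list of (source, target) hyperlink pairs; score by in-degree."""
    indegree = sum(1 for _, tgt in links if tgt == doc_id)
    return math.log1p(indegree)

def content_score(text, query_terms):
    """Fraction of document words that match the query terms."""
    words = text.lower().split()
    return sum(words.count(t) for t in query_terms) / (len(words) or 1)

def rank(docs, links, query_terms, alpha=0.5):
    """docs: {doc_id: text}. Return doc_ids sorted by a blended score."""
    scored = {
        d: alpha * link_score(d, links) + (1 - alpha) * content_score(t, query_terms)
        for d, t in docs.items()
    }
    return sorted(scored, key=scored.get, reverse=True)

docs = {"a": "web mining and ranking", "b": "ranking web documents by links"}
links = [("a", "b"), ("c", "b")]
print(rank(docs, links, ["ranking", "web"]))  # ['b', 'a']
```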
The next-generation Web will increase the need for a highly organized and continually evolving method of storing references to Web objects. These requirements could be met by the development of a new bookmark structure. This paper endeavors to identify the key requirements of such a bookmark, specifically in relation to Web documents, and sets out a suggested design through which these needs may be met. The prototype developed offers features such as the sharing of bookmarks between users and groups of users. Bookmarks for Web documents in this prototype allow more specific information to be stored, such as the URL, the document type, the document title, keywords, a summary, user annotations, the date added, the date last visited, and the date last modified. Individuals may access the service from anywhere on the Internet, as long as they have a Java-enabled Web browser.
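A minimal sketch of such a bookmark record, assuming field names and types that the abstract only hints at (they are not the prototype's actual schema), might look like this:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class Bookmark:
    """Richer bookmark record; fields follow the list given in the abstract."""
    url: str
    document_type: str                                      # e.g. "text/html"
    title: str
    keywords: List[str] = field(default_factory=list)
    summary: str = ""
    annotations: List[str] = field(default_factory=list)    # user notes
    date_added: Optional[date] = None
    date_last_visited: Optional[date] = None
    date_last_modified: Optional[date] = None
    shared_with: List[str] = field(default_factory=list)    # users or groups

bm = Bookmark(
    url="https://example.org/paper.html",
    document_type="text/html",
    title="Next-generation bookmarks",
    keywords=["bookmarks", "web documents"],
    date_added=date(1999, 1, 1),
)
```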
This paper presents N-Site, a distributed consensus building and negotiation support system, which is used to provide geographically dispersed teams with agile access to a Web-based group decision support system. Four teams located in France, Mexico, the Ukraine, and the United States participated in the N-Site project. Each team was required to research the problem using the World Wide Web (WWW). With this background, each team identified opportunities, threats and alternatives as a basis for developing a response to the Cuban Missile Crisis that confronted President Kennedy in October 1962. The strategic assessment model (SAM) (M. Tavana, J. Multi-Criteria Decision Anal. 11 (2002) 75–96; M. Tavana and S. Banerjee, Decision Sci. 26 (1995) 119–143) was used by each team to choose a strategy that best fit the team's perspective. SAM and WWW enabled the teams to evaluate strategic alternatives and build consensus based on a series of intuitive and analytical methods including environmental scanning, the analytic hierarchy process (AHP) and subjective probabilities. The WWW was used to achieve interaction among the international teams as they attempted to negotiate a decision framework and select a diplomatic response. The project was assessed with a Web-distributed survey instrument. This use of the WWW has implications for international diplomacy as well as global business.
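As a small illustration of the AHP step mentioned above (not of SAM itself), the sketch below derives priority weights for three hypothetical response alternatives from an invented pairwise comparison matrix, using the common row geometric mean approximation.

```python
import numpy as np

# Invented pairwise comparison matrix for illustration only; the alternatives
# and judgments are not taken from the paper.
comparisons = np.array([
    [1.0, 3.0, 5.0],   # "blockade" vs. the other alternatives
    [1/3, 1.0, 2.0],   # "air strike"
    [1/5, 1/2, 1.0],   # "diplomacy only"
])

# Row geometric means, normalized to sum to 1, approximate the AHP priority vector.
geometric_means = comparisons.prod(axis=1) ** (1.0 / comparisons.shape[1])
weights = geometric_means / geometric_means.sum()
print(dict(zip(["blockade", "air strike", "diplomacy only"], weights.round(3))))
# {'blockade': 0.648, 'air strike': 0.23, 'diplomacy only': 0.122}
```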
Web mining refers to the process of discovering potentially useful and previously unknown information or knowledge from web data. A graph-based framework is used for classifying Web users based on their navigation patterns. GOLEM is a learning algorithm that uses the example space to restrict the solution search space. In this paper, this algorithm is modified for the graph-based framework. GOLEM is appropriate in this application where the solution search space is very large. An experimental illustration is presented.
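The abstract does not detail the modified algorithm; as a loose sketch of the graph-based idea, the following represents each navigation session as a set of directed page-to-page edges, generalizes positive examples by intersecting their edge sets, and classifies a new session by containment of that pattern. This is a deliberate simplification, not GOLEM's relative least general generalization.

```python
def session_graph(pages):
    """Turn an ordered list of visited pages into a set of directed edges."""
    return {(a, b) for a, b in zip(pages, pages[1:])}

def generalize(examples):
    """Crude generalization: keep only edges shared by all positive sessions."""
    graphs = [session_graph(s) for s in examples]
    pattern = graphs[0]
    for g in graphs[1:]:
        pattern &= g
    return pattern

def classify(session, pattern):
    """A session matches the class if it contains the generalized edge pattern."""
    return pattern <= session_graph(session)

positives = [
    ["/home", "/catalog", "/item", "/checkout"],
    ["/home", "/catalog", "/item", "/reviews", "/checkout"],
]
pattern = generalize(positives)   # {('/home', '/catalog'), ('/catalog', '/item')}
print(classify(["/home", "/catalog", "/item"], pattern))   # True
```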
Web search engines crawl hyperlinks to discover new documents; yet when they index the discovered documents, they essentially revert to conventional information retrieval models and concentrate on indexing the terms of a single document.
We propose to overcome such limits with an approach based on temporal logic. By modeling a web site as a finite state transition system we are able to define complex and selective queries over hyperlinks with the aid of Computation Tree Logic operators.
We deployed the proposed approach in a prototype system that allows users to pose queries in natural language. Queries are automatically translated into Computation Tree Logic, and the answer returned by our system is a set of paths. Experiments carried out with the aid of human experts show improved retrieval effectiveness with respect to current search engines.
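As a hedged sketch of the underlying idea (not the authors' prototype), the following models a hypothetical site as a finite state transition system, with pages as states and hyperlinks as transitions, and evaluates the CTL-style query EF(goal) by breadth-first search, returning a witness path of hyperlinks.

```python
from collections import deque

# Hypothetical site graph for the example: page -> pages it links to.
site = {
    "index.html": ["products.html", "about.html"],
    "products.html": ["order.html"],
    "about.html": [],
    "order.html": [],
}

def ef_witness(start, goal, graph):
    """Return a hyperlink path from start to a page satisfying goal, if any (EF goal)."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if goal(path[-1]):
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

print(ef_witness("index.html", lambda p: "order" in p, site))
# ['index.html', 'products.html', 'order.html']
```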
Current efforts for teacher education reform in several Asia-Pacific countries suggest that preservice teachers need to develop a better understanding of learning. One way to do this is to have preservice teachers reflect upon themselves as learners in their teacher education classes. In this study, students in a course in an Australian university teacher education program used a World Wide Web site designed around a three-phase reflective framework of analysis, synthesis, and theorizing, to help them reflect on their experiences as learners in university classes. The web environment included a database to help the students structure their reflections, collect data, and theorize about their experiences. In the third phase of the framework, theorizing, the students developed a metaphor to represent the complexity of their own learning and labelled it with key learning factors. Most students believed that the design of the website guided them in their reflections and that the metaphor enabled them to represent the complexity of their own classroom learning. Students in other Asia-Pacific countries would also be able to use the reflective framework in their teacher education classes, and it could be adapted to include other influences on learning, such as culture.
The World Wide Web has become the dominant tool for data transmission, supporting activities such as data retrieval and data transactions. Retrieving data from the web is a complex procedure because of the sheer volume of the web domain. How websites are actually used is described through web usage mining, which mines weblog records to identify the patterns by which users access web pages. Web page prediction assists web users in finding the pages they need and obtaining the information relevant to their requirements. Several effective algorithms have been developed to mine association rules that make the predictive model more appropriate for web prediction, and they can be revised to track the changing nature of web access patterns. The Apriori algorithm, which extracts frequent itemsets and interrelation rules from relational data, is commonly utilized for web page prediction and remains the standard model for deriving patterns and rules from datasets in association rule extraction. Apriori, however, generates a large number of mined association rules for web page prediction. Hence, to select the best rules, the proposed deer hunting rooster-based chicken swarm optimization algorithm is used, integrating the dominating social behaviour of the cockerel search agents with the hunting habits of the search creatures and their traits of looking for food. Further, a neural network (NN) is employed in this research to predict web pages with minimum error. The NN is trained, in an unsupervised learning setting, to analyze an input dataset and produce the desired result, and its effectiveness is enhanced by optimal tuning of its weights with the adaptive deer hunting rooster-based chicken swarm optimization algorithm. The experimental analysis illustrates that the proposed adaptive deer hunting rooster-based chicken swarm optimization framework achieves lower error measures, such as a mean deviation of 139.89 and a symmetric mean absolute percentage error of 0.45579, for the FIFA dataset. The proposed web page prediction model's L2 norm and infinity norm are 58.017 and 14, respectively, for the MSNBC_SPMF dataset.
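As an illustration of the baseline Apriori step referred to above (not of the proposed deer hunting rooster-based chicken swarm optimization or the neural network), the sketch below mines frequent page sets from invented user sessions and uses them to suggest a likely next page; the support threshold and sessions are assumptions for the example.

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(sessions, min_support=2, max_size=2):
    """Count page combinations across sessions and keep those above min_support."""
    counts = Counter()
    for session in sessions:
        pages = set(session)
        for size in range(1, max_size + 1):
            for combo in combinations(sorted(pages), size):
                counts[combo] += 1
    return {items: c for items, c in counts.items() if c >= min_support}

def predict_next(current_page, frequent):
    """Pick the page most often co-visited with the current page."""
    candidates = Counter()
    for items, count in frequent.items():
        if len(items) == 2 and current_page in items:
            other = items[0] if items[1] == current_page else items[1]
            candidates[other] += count
    return candidates.most_common(1)[0][0] if candidates else None

sessions = [["home", "news", "sports"], ["home", "sports"], ["home", "news"]]
freq = frequent_itemsets(sessions)
print(predict_next("home", freq))   # 'news' or 'sports' (both co-occur twice)
```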
The development of Web applications requires a variety of tasks, some of them involving aesthetic and cognitive aspects. As a consequence, there is a need for appropriate models and methodologies that allow the heterogeneous members of hypermedia project teams to communicate effectively and that guide them during the development process. In this chapter, we describe some hypermedia models and methodologies proposed for the development of hypermedia applications.
The greatest use of the Internet and new online technologies today is for constructive purposes. However, the use of the same technologies to spread illegal and objectionable content has increased dramatically in recent years. Internet users have begun to protect themselves and their wards with so-called web content filters, which allow access to legitimate content and block access to objectionable and unwanted content. In this paper, we describe the design and anatomy of an open architecture for creating a social filtering service and show how it can be implemented. The system is designed to overcome the deficiencies of today's content rating and filtering systems and frameworks by empowering and involving end users, and to actively support user collaboration and cooperation through techniques and methodologies that combine a low cost of participation with ease of use. The proposed work improves the quality of information shared by existing collaborative filtering systems by integrating tagging and folksonomy techniques, which we have extended so that relations between document and rating metadata can be expressed with only a minimal amount of additional information. Data structures and workflows are optimized for scalability and fast service response times and allow the efficient computation and processing of even very large volumes of data. We show how such a service can be used by designing and implementing two client applications for filtering pornographic web content.
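As a rough sketch of how community ratings and tags might be combined into a filtering decision (the thresholds, weights, and tag vocabulary below are assumptions, not the service's actual algorithm):

```python
from collections import Counter

def filter_decision(ratings, tags, block_threshold=0.5):
    """
    ratings: list of booleans submitted by users (True = objectionable).
    tags:    list of free-form tags attached to the document by users.
    Returns "block" or "allow" based on a weighted combination of both signals.
    """
    objectionable_votes = sum(ratings)
    vote_score = objectionable_votes / len(ratings) if ratings else 0.0

    # Tags act as lightweight metadata relating documents to rating categories.
    tag_counts = Counter(t.lower() for t in tags)
    flagged_tags = {"pornography", "violence", "adult"}
    tag_score = sum(tag_counts[t] for t in flagged_tags) / (len(tags) or 1)

    combined = 0.7 * vote_score + 0.3 * tag_score
    return "block" if combined >= block_threshold else "allow"

print(filter_decision(ratings=[True, True, False], tags=["adult", "gallery"]))  # block
```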