Friday, February 20, 2009

VBScript

VBScript is a Microsoft proprietary interpreted scripting language whose goals and operation are virtually identical to those of JavaScript/JScript. VBScript, however, has syntax more like Visual Basic than Java. It is interpreted directly from source code and permits scripting within an HTML document. As with JavaScript/JScript, VBScript can be executed within the browser or at the server before the document is sent to the browser.

VBScript is a procedural language and so uses subroutines as the basic unit. VBScript grew out of Visual Basic, a programming language that has been around for several years. Visual Basic is the basis for the scripting languages in the Microsoft Office packages (Word, Access, Excel, and PowerPoint). Visual Basic is component-based: a Visual Basic program is built by placing components onto a form and then using the Visual Basic language to link them together. Visual Basic also gave rise to the grandfather of the ActiveX control, the Visual Basic Control (VBX).

VBXs shared a common interface that allowed them to be placed on a Visual Basic form. This was one of the first widespread uses of component-based software. VBXs gave way to OLE Controls (OCXs), which were renamed ActiveX. When Microsoft took an interest in the Internet, they moved OCX to ActiveX and modeled VBScript after Visual Basic. The main difference between Visual Basic and VBScript is that, to promote security, VBScript has no functions that interact with files on the user’s machine.


JavaScript and JScript

JavaScript and JScript are virtually identical interpreted scripting languages from Netscape and Microsoft. Microsoft’s JScript is a clone of the earlier and widely used JavaScript. Both languages are interpreted directly from the source code and permit scripting within an HTML document. The scripts may be executed within the browser or at the server before the document is sent to the browser. The constructs are the same, except the server side has additional functionality.

JavaScript is an object-based scripting language that has its roots in a joint development program between Netscape and Sun, and has become Netscape’s Web scripting language. It is a very simple programming language that allows HTML pages to include functions and scripts that can recognize and respond to user events such as mouse clicks, user input, and page navigation. These scripts can help implement complex web page behavior with a relatively small amount of programming effort.

The JavaScript language resembles Java, but without Java’s static typing and strong type checking. In contrast to Java’s compile-time system of classes built by declarations, JavaScript supports a runtime system based on a small number of data types representing numeric, Boolean, and String values. JavaScript complements Java by exposing useful properties of Java applets to script developers. JavaScript statements can get and set exposed properties to query the state or alter the performance of an applet or plug-in.
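The contrast between JavaScript’s runtime typing and Java’s compile-time classes can be seen in a short sketch (runnable in any modern JavaScript engine; the variable names are purely illustrative):

```javascript
// In JavaScript, a variable's type is determined at runtime and may change.
let value = 42;            // holds a numeric value
console.log(typeof value); // "number"

value = "forty-two";       // the same variable now holds a String value
console.log(typeof value); // "string"

value = true;              // ...and now a Boolean value
console.log(typeof value); // "boolean"

// Objects are extensible at runtime: properties can be created on
// assignment, with no class declaration, unlike Java's static classes.
const applet = {};
applet.width = 300;        // property springs into existence here
console.log(applet.width); // 300
```

In Java, each of these reassignments would be a compile-time type error; in JavaScript they are all legal, which is what the comparison below calls "loose typing".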

Comparison of JavaScript and Java applets
JavaScript
  • Interpreted (not compiled) by client
  • Object-based; code uses built-in extensible objects, but no classes or inheritance
  • Code integrated with and embedded in HTML
  • Variable data types not declared (loose typing)
  • Dynamic binding; object references checked at runtime
  • Cannot automatically write to hard disk
Java (applets)
  • Compiled on server before execution on client
  • Object-oriented; applets consist of object classes with inheritance
  • Applets distinct from HTML (accessed from HTML pages)
  • Variable data types must be declared (strong typing)
  • Static binding; object references must exist at compile-time
  • Cannot automatically write to hard disk


Advantages and Disadvantages of the Web-DBMS Approach

The web as a platform for database systems can deliver innovative solutions for both inter- and intra-company business issues. Unfortunately, there are also disadvantages associated with this approach.

ADVANTAGES

1) Advantages that come through the use of a DBMS

2) Simplicity
In its original form, HTML as a markup language was easy for both developers and naïve end-users to learn.

3) Platform Independence

4) Graphical User Interface

5) Standardization
HTML is a de facto standard to which all web browsers adhere, allowing an HTML document on one machine to be read by users on any machine in the world with an Internet connection and a web browser. Using HTML, developers learn a single language and end-users use a single GUI.

6) Cross-platform support

7) Transparent network access

8) Scalable deployment

9) Innovation
As an Internet platform, the web enables organizations to provide new services and reach new customers through globally accessible applications. Such benefits were not previously available with host-based or traditional client-server and groupware applications.

DISADVANTAGES

1) Reliability
The Internet is currently an unreliable and slow communication medium: when a request is carried across the Internet, there is no real guarantee of delivery.

2) Security
Security is of great concern for an organization that makes its database accessible on the web. User authentication and secure data transmission are critical because of the large number of potentially anonymous users.

3) Cost
The cost of maintaining a web database application can be very high.

4) Scalability
Web applications can face unpredictable and potentially enormous peak loads. This requires the development of a high-performance server architecture that is highly scalable. To improve scalability, web farms have been introduced, with two or more servers hosting the same site. HTTP requests are usually routed to each server in the farm in a round-robin fashion, to distribute load and allow the site to handle more requests. However, this can make maintaining state information more complex.
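The round-robin routing just described can be sketched in a few lines of JavaScript (the server names are hypothetical; real web farms do this in a load balancer, not application code):

```javascript
// A minimal round-robin dispatcher for a web farm: each incoming request
// is assigned to the next server in the list, wrapping around at the end.
function makeRoundRobin(servers) {
  let next = 0;
  return function route() {
    const server = servers[next];
    next = (next + 1) % servers.length; // wrap around to spread the load
    return server;
  };
}

const route = makeRoundRobin(["web1", "web2", "web3"]);
console.log(route()); // "web1"
console.log(route()); // "web2"
console.log(route()); // "web3"
console.log(route()); // "web1" -- back to the first server
```

Note how consecutive requests from the same user can land on different servers; this is exactly why state information (such as a session) becomes harder to maintain in a web farm.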

5) Limited functionality of HTML

6) Statelessness
The statelessness of the web environment makes the management of database connections and user transactions difficult, requiring applications to maintain additional information.

7) Bandwidth

8) Performance
Many parts of complex web database clients center around interpreted languages, making them slower than traditional database clients, which are natively compiled.

9) Immaturity of development tools.


Web-DBMS Architecture

1) TRADITIONAL TWO-TIER CLIENT-SERVER ARCHITECTURE

Data-intensive business applications consist of four major components: the database, the transaction logic, the application logic, and the user interface. Originally, these components were all in one place, as would be expected in a highly centralized business environment.
Figure 1 The Traditional two-tier client-server architecture

To accommodate an increasingly decentralized business environment, the two-tier client-server system was developed. The traditional two-tier client-server architecture provides a basic separation of tasks.

The client (tier 1) is primarily responsible for the presentation of data to the user.
The server (tier 2) is primarily responsible for supplying data services to the client, as illustrated in Figure 1.

Presentation services handle user interface actions and the main business application logic. Data services provide limited business application logic, typically validation that the client is unable to carry out due to lack of information, and access to the requested data independent of its location. The data can come from relational DBMSs, object-relational DBMSs, object-oriented DBMSs, legacy DBMSs, or proprietary data access systems. Typically, the client would run on end-user desktops and interact with a centralized database server over a network.


2) THREE-TIER ARCHITECTURE

The need for enterprise scalability challenged this traditional two-tier client-server model. In the mid-1990s, as applications became more complex and potentially could be deployed to hundreds or thousands of end-users, the client side presented two problems that prevented true scalability:
  • A ‘fat’ client, requiring considerable resources on the client’s computer to run effectively. This includes disk space, RAM and CPU power.
  • A significant client-server administration overhead.

By 1995, a new variation of the traditional two-tier client-server model appeared to solve the problem of enterprise scalability. This new architecture proposed three layers, each potentially running on a different platform:
  1. The user interface layer, which runs on the end-user’s computer (the client).
  2. The business logic and data processing layer. This middle tier runs on a server and is often called the application server.
  3. A DBMS, which stores the data required by the middle tier. This tier may run on a separate server called the database server.
Figure 2 The Three tier architecture

As illustrated in Figure 2, the client is now responsible only for the application’s user interface and perhaps some simple logic processing, such as input validation, thereby providing a ‘thin’ client. The core business logic of the application now resides in its own layer, physically connected to the client and database server over a local area network (LAN) or wide area network (WAN). One application server is designed to serve multiple clients.

The three-tier design has many advantages over traditional two-tier or single-tier designs, which include:
  1. The need for less expensive hardware because the client is ‘thin’.
  2. Application maintenance is centralized with the transfer of the business logic for many end-users into a single application server. This eliminates the concerns of software distribution that are problematic in the traditional two-tier client-server model.
  3. The added modularity makes it easier to modify or replace one tier without affecting the other tiers.
  4. Load balancing is easier with the separation of the core business logic from the database functions.

An additional advantage is that the three-tier architecture maps quite naturally to the Web environment, with a Web browser acting as the ‘thin’ client, and a Web server acting as the application server. The three-tier architecture can be extended to n tiers, with additional tiers added to provide more flexibility and scalability.


Requirements for Web-DBMS Integration

While many DBMS vendors are working to provide proprietary database connectivity solutions for the Web, most organizations require a more general solution to prevent them from being tied into one technology. In this section, we briefly list some of the most important requirements for the integration of database applications with the Web. These requirements are ideals and not fully achievable at the present time, and some may need to be traded off against others. The requirements are as follows:

  • The ability to access valuable corporate data in a secure manner.
  • Data and vendor independent connectivity to allow freedom of choice in the selection of the DBMS now and in the future.
  • The ability to interface to the database independent of any proprietary Web browser or Web server.
  • A connectivity solution that takes advantage of all the features of an organization’s DBMS.
  • An open-architecture approach to allow interoperability with a variety of systems and technologies.
  • A cost-effective solution that allows for scalability, growth, and changes in strategic directions, and helps reduce the costs of developing and maintaining applications.
  • Support for transactions that span multiple HTTP requests.
  • Support for session and application-based authentication.
  • Acceptable performance.
  • Minimal administration overhead.
  • A set of high-level productivity tools to allow applications to be developed and deployed with relative ease and speed.


Thursday, February 19, 2009

Uniform Resource Locators (URL)

URL (a string of alphanumeric characters that represents the location or address of a resource on the Internet and how that resource should be accessed)

Uniform Resource Locators (URLs) define uniquely where documents (resources) can be found on the Internet. Other related terms that may be encountered are URIs and URNs. Uniform Resource Identifiers (URIs) are the generic set of all names/addresses that refer to Internet resources. Uniform Resource Names (URNs) also designate a resource on the Internet, but do so using a persistent, location-independent name. URNs are very general, rely on name lookup services, and are therefore dependent on additional services that are not always generally available. URLs, on the other hand, identify a resource on the Internet using a scheme based on the resource’s location. URLs are the most commonly used identification scheme and are the basis for HTTP and the Web.

The syntax of a URL is quite simple and consists of three basic parts: the protocol used for the connection, the host name, and the path name on the host where the resource can be found. In addition, the URL can optionally specify the port through which the connection to the host should be made (default 80 for HTTP), and a query string, which is one of the primary methods for passing data from the client to the server (for example, to a CGI script). The syntax of a URL is as follows:

<protocol>://<host>[:<port>]/absolute_path[?arguments]

The <protocol> specifies the mechanism to be used by the browser to communicate with the resource. Common access methods are HTTP, S-HTTP (secure HTTP), file (load a file from a local disk), FTP, mailto (send mail to the specified mail address), Gopher, NNTP, and Telnet. For example:

http://www.systembisnis.com/?id=fianulfi

is a URL that identifies the home page at Systembisnis. The protocol is HTTP, the host is www.systembisnis.com, the path is /, and ?id=fianulfi is the query string passed to the server.
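Modern JavaScript engines expose this decomposition directly through the built-in `URL` class, which splits a URL string into the parts described above:

```javascript
// Decompose a URL into protocol, host, port, path, and query string.
const url = new URL("http://www.systembisnis.com/?id=fianulfi");

console.log(url.protocol); // "http:"
console.log(url.hostname); // "www.systembisnis.com"
console.log(url.port);     // "" -- empty means the default port (80 for HTTP)
console.log(url.pathname); // "/"
console.log(url.search);   // "?id=fianulfi" -- the query string
console.log(url.searchParams.get("id")); // "fianulfi"
```

Note that the port comes back empty when the URL does not specify one, matching the optional `[:port]` part of the syntax.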


HyperText Markup Language (HTML)

HTML (the document formatting language used to design most Web pages)

The HyperText Markup Language (HTML) is a system for marking up, or tagging, a document so that it can be published on the Web. HTML defines what is generally transmitted between nodes in the network. It is a simple, yet powerful, platform-independent document language (Berners-Lee and Connolly, 1993). HTML was originally developed by Tim Berners-Lee while at CERN but was standardized in November 1995 as IETF (Internet Engineering Task Force) RFC 1866, commonly referred to as HTML version 2. The language has since evolved, and the World Wide Web Consortium (W3C) currently recommends use of HTML 4.01, which has mechanisms for frames, style sheets, scripting, and embedded objects (W3C, 1999). In early 2000, W3C produced XHTML 1.0 (eXtensible HyperText Markup Language) as a reformulation of HTML 4 in XML (eXtensible Markup Language) (W3C, 2000).

HTML has been developed with the intention that various types of devices should be able to use information on the Web: PCs with graphics displays of varying resolution and color depths, cellular telephones, hand-held devices, devices for speech input and output, and so on.

HTML is an application of the Standard Generalized Markup Language (SGML), a system for defining structured document types and markup languages to represent instances of those document types (ISO, 1986). HTML is one such markup language.


Friday, February 13, 2009

HyperText Transfer Protocol..

HTTP (the protocol used to transfer Web pages through the Internet)

The HyperText Transfer Protocol (HTTP) defines how clients and servers communicate. HTTP is a generic object-oriented, stateless protocol to transmit information between servers and clients (Berners-Lee, 1992). HTTP/0.9 was used during the early development of the Web. HTTP/1.0, which was released in 1995 as informational RFC 1945, reflected common usage of the protocol (Berners-Lee et al., 1996). The most recent release, HTTP/1.1, provides more functionality and support for allowing multiple transactions to occur between client and server over the same connection.

HTTP is based on a request-response paradigm. An HTTP transaction consists of the following stages.

*) Connection – the client establishes a connection with the Web server.
*) Request – the client sends a request message to the Web server.
*) Response – the Web server sends a response to the client.
*) Close – the connection is closed by the Web server.

HTTP is currently a stateless protocol – the server retains no information between requests. Thus a Web server has no memory of previous requests. This means that the information a user enters on one page is not automatically available on the next page requested, unless the Web server takes steps to make that happen, in which case the server must somehow identify which requests, out of the thousands it receives, come from the same user. For most applications, this stateless property of HTTP is a benefit that permits clients and servers to be written with simple logic and run ‘lean’, with no extra memory or disk space taken up with information from old requests. Unfortunately, the stateless property of HTTP makes it difficult to support the concept of a session that is essential to basic DBMS transactions. Various schemes have been proposed to compensate for the stateless nature of HTTP, such as returning Web pages with hidden fields containing transaction identifiers, and using Web page forms where all the information is entered locally and then submitted as a single transaction. All these schemes are limited in the types of application they support and require special extensions to the Web servers.
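The hidden-field scheme can be sketched as follows: the server embeds a transaction identifier in each form it returns, and recovers it from the body of the next request. This is only an illustration of the idea; the field name `txn_id` and the form fields are invented for the example:

```javascript
// Embed a transaction identifier in a form as a hidden field, so the next
// (otherwise stateless) HTTP request can be tied back to the same session.
function formWithTransactionId(txnId) {
  return '<form method="POST" action="/order">' +
         '<input type="hidden" name="txn_id" value="' + txnId + '">' +
         '<input type="text" name="quantity">' +
         '</form>';
}

// On the next POST, the server parses the body and recovers the identifier.
function extractTransactionId(postBody) {
  const params = new URLSearchParams(postBody);
  return params.get("txn_id");
}

const html = formWithTransactionId("T-1029");
console.log(html.includes('name="txn_id" value="T-1029"')); // true

console.log(extractTransactionId("txn_id=T-1029&quantity=3")); // "T-1029"
```

The identifier makes a round trip through the browser on every request, which is exactly the kind of extra bookkeeping the text says these schemes impose.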

HTTP request..

An HTTP request consists of a header indicating the type of request, the name of a resource, and the HTTP version, followed by an optional body. The header is separated from the body by a blank line. The main HTTP request types are:

*) GET (One of the most common types of request, which retrieves (gets) the resource the user has requested.)
*) POST (Another common type of request, which transfers (posts) data to the specified resource. Usually the data sent comes from an HTML form that the user has filled in, and the server may use this data to search the Internet or query a database.)
*) HEAD (Similar to GET, but forces the server to return only an HTTP header instead of response data.)
*) PUT (HTTP/1.1) Uploads the resource to the server.
*) DELETE (HTTP/1.1) Deletes the resource from the server.
*) OPTIONS (HTTP/1.1) Requests the server’s configuration options.
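A raw HTTP request is just text: a request line, header lines, a blank line, then an optional body. A minimal sketch that assembles such a request (the host and paths are illustrative):

```javascript
// Build the text of a raw HTTP/1.1 request: request line, headers,
// a blank line, then the optional body.
function buildRequest(method, path, host, body) {
  const lines = [
    method + " " + path + " HTTP/1.1",
    "Host: " + host,
  ];
  if (body) {
    lines.push("Content-Length: " + body.length);
  }
  lines.push("");          // the blank line that separates header from body
  lines.push(body || "");
  return lines.join("\r\n");
}

console.log(buildRequest("GET", "/index.html", "www.example.com"));
// GET /index.html HTTP/1.1
// Host: www.example.com
//
console.log(buildRequest("POST", "/search", "www.example.com", "q=HTTP"));
```

The GET request carries no body, while the POST request carries the form data `q=HTTP` after the blank line, matching the request types listed above.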


HTTP response..

An HTTP response has a header containing the HTTP version, the status of the response, and header information to control the response behavior, as well as any requested data in a response body. Again, the header is separated from the body by a blank line.
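Because the header is separated from the body by a blank line, splitting a raw response back into its parts is straightforward (the response text here is a made-up example):

```javascript
// Split a raw HTTP response into its status line, headers, and body.
function parseResponse(raw) {
  const sep = raw.indexOf("\r\n\r\n");       // blank line: end of header
  const head = raw.slice(0, sep).split("\r\n");
  return {
    statusLine: head[0],                     // e.g. "HTTP/1.1 200 OK"
    headers: head.slice(1),                  // remaining header lines
    body: raw.slice(sep + 4),                // everything after the blank line
  };
}

const raw = "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html>...</html>";
const res = parseResponse(raw);
console.log(res.statusLine); // "HTTP/1.1 200 OK"
console.log(res.headers);    // [ "Content-Type: text/html" ]
console.log(res.body);       // "<html>...</html>"
```

The status line carries the HTTP version and the status of the response, and the body holds the requested data, exactly as described above.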


The Web..

The World Wide Web (WWW) is a hypermedia-based system that provides a means of browsing information on the internet in a non-sequential way using hyperlinks.

The World Wide Web (Web for short) provides a simple ‘point and click’ means of exploring the immense volume of pages of information residing on the Internet (Berners-Lee, 1992; Berners-Lee et al., 1994). Information on the Web is presented on Web pages, which appear as a collection of text, graphics, pictures, sound, and video. In addition, a Web page can contain hyperlinks to other Web pages, which allow users to navigate in a non-sequential way through information.

Much of the Web’s success is due to the simplicity with which it allows users to provide, use, and refer to information distributed geographically around the world. Furthermore, it provides users with the ability to browse multimedia documents independently of the computer hardware being used. It is also compatible with other existing data communication protocols, such as Gopher, FTP (File Transfer Protocol), NNTP (Network News Transfer Protocol), and Telnet (for remote login sessions).

The Web consists of a network of computers that can act in two roles: as servers, providing information; and as clients, usually referred to as browsers, requesting information.

Much of the information on the Web is stored in documents using a language called HTML (HyperText Markup Language), and browsers must understand and interpret HTML to display these documents. The protocol that governs the exchange of information between the Web server and the browser is called HTTP (HyperText Transfer Protocol). Documents and locations within documents are identified by an address, defined as a Uniform Resource Locator (URL).


e-Commerce and e-Business…

e-Commerce (customers can place and pay for orders via the business Web site)

Businesses at this stage are not only using their Web site as a dynamic brochure: they also allow customers to make purchases from the Web site, and may even provide service and support online as well. This would usually involve some form of secure transaction using one of the technologies discussed. This allows the business to trade 24 hours a day, every day of the year, thereby potentially increasing sales opportunities, reducing the cost of sales and service, and achieving improved customer satisfaction.

e-Business (complete integration of Internet technology into the economic infrastructure of the business)

Businesses at this stage have embraced Internet technology through many parts of their business. Internal and external processes are managed through intranets and extranets; sales, service, and promotion are all based around the Web. Among the potential advantages, the business achieves faster communication, streamlined and more efficient processes, and improved productivity.


Intranet and Extranet..

An intranet is a Web site or group of sites belonging to an organization, accessible only by the members of the organization.

Internet standards for exchanging e-mail and publishing Web pages are becoming increasingly popular for business use within closed networks called intranets. Typically, an intranet is connected to the wider public Internet through a firewall, with restrictions imposed on the types of information that can pass into and out of the intranet.

An extranet is an intranet that is partially accessible to authorized outsiders.

Whereas an intranet resides behind a firewall and is accessible only to people who are members of the same organization, an extranet provides various levels of accessibility to outsiders. Typically, an extranet can be accessed only if the outsider has a valid username and password, and this identity determines which parts of the extranet can be viewed. Extranets are becoming a very popular means for business partners to exchange information.

In contrast, implementing an extranet is relatively simple. It uses standard Internet components: a Web server, a browser or applet-based application, and the Internet itself as a communications infrastructure. In addition, the extranet allows organizations to provide information about themselves and their products to their customers.


Monday, February 9, 2009

Internet.. (Definition & History)

The Internet is a worldwide collection of interconnected computer networks.
It is a global network built from local and regional computer networks, enabling data communication to be shared among the many computers connected to it.
The Internet is made up of many separate but interconnected networks belonging to commercial, educational, and government organizations, and Internet Service Providers (ISPs).

The services offered on the Internet include electronic mail (e-mail), conferencing, and chat services, as well as the ability to access remote computers and send and receive files. It began in the late 1960s and early 1970s as an experimental US Department of Defense project called ARPANET (Advanced Research Projects Agency NETwork), investigating how to build networks that could withstand partial outages (such as nuclear bomb attacks) and still survive.

Several universities in the USA, among them UCLA, Stanford, UC Santa Barbara, and the University of Utah, cooperated on this project, and at the beginning it successfully connected four computers at those different university sites.

By 1977, ARPANET already connected more than 100 mainframe computers, and today around 4 million hosts can be found on the interconnected networks. The real total of connected computers cannot be determined with certainty, because the number of computers joining the various networks keeps growing.

In 1982, TCP/IP (Transmission Control Protocol and Internet Protocol) was adopted as the standard communications protocol for ARPANET. TCP is responsible for ensuring correct delivery of messages that move from one computer to another. IP manages the sending and receiving of packets of data between machines, based on a four-byte destination address (the IP number), which is assigned to an organization by the Internet authorities. The term TCP/IP sometimes refers to the entire Internet suite of protocols that commonly run on top of TCP/IP, such as FTP (File Transfer Protocol), SMTP (Simple Mail Transfer Protocol), Telnet (Telecommunication Network), DNS (Domain Name Service), POP (Post Office Protocol), and so forth.
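The four-byte IP address can be illustrated by packing a dotted-quad address into a single 32-bit number, one byte per component (a sketch of the arithmetic only, not how any particular IP stack stores addresses):

```javascript
// Pack a dotted-quad IPv4 address into its four-byte (32-bit) integer form.
function ipToInt(ip) {
  return ip.split(".")
           .map(Number)
           // each of the four components occupies one byte (0-255),
           // so each step shifts the accumulator up by one byte
           .reduce((acc, octet) => acc * 256 + octet, 0);
}

console.log(ipToInt("127.0.0.1"));       // 2130706433
console.log(ipToInt("255.255.255.255")); // 4294967295 -- the largest address
```

Four bytes give 256^4 (about 4.3 billion) possible addresses, which is why the later shortage of IPv4 addresses motivated the larger addresses of IPv6.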

In the process of developing this technology, the military forged strong links with large corporations and universities. As a result, responsibility for the continuing research shifted to the National Science Foundation (NSF), and in 1986 NSFNET (National Science Foundation NETwork) was created, forming the new backbone of the network. Under the aegis of the NSF, the network became known as the Internet. However, NSFNET itself ceased to form the Internet backbone in 1995, and a fully commercial system of backbones has been created in its place. The current Internet has been likened to an electronic city with virtual libraries, storefronts, business offices, art galleries, and so on.

Another term that is popular, particularly with the media, is the ‘information superhighway’. This is a metaphor for the future worldwide network that will provide connectivity, access to information, and online services for users around the world. The term was first used in 1993 by the then US Vice President Al Gore in a speech outlining plans to build a high-speed national data communications network, of which the Internet is a prototype. The Internet began with funding from the US NSF as a means to allow American universities to share the resources of five national supercomputing centers.

 