Write a note on E. F. Codd's rules for SQL

E. F. Codd's Rules


E. F. Codd was a computer scientist who invented the relational model for database management. Relational databases were created based on this relational model.

Although they are widely known as Codd's 12 rules, 13 rules (numbered 0 to 12) were actually proposed; they check a DBMS's design against his relational model.

Codd's rules define the qualities a DBMS needs in order to qualify as a Relational Database Management System (RDBMS).



Rule 0: Foundation Rule

This rule states that for a system to qualify as an RDBMS, it must be able to manage the database entirely through its relational capabilities.

Rule 1: Information Rule

All information (including metadata) must be represented as values in the rows and columns of tables, and in no other way.

Rule 2: Guaranteed Access

Each unique piece of data (atomic value) must be logically accessible through a combination of table name + primary key value (row) + attribute name (column).

Rule 3: Systematic Treatment of NULL Values

NULL has several meanings: missing data, not applicable, or no value. NULLs must be handled consistently by the system, and primary key columns must not be NULL. Any expression involving NULL must evaluate to NULL.


Rule 4: Active Online Catalog

The description of the database (the data dictionary, or catalog) must itself be stored in the database as tables. The catalog is governed by the same rules as the rest of the database, and the same query language used on application data must work on the catalog.

Rule 5: Comprehensive Data Sublanguage

There must be a well-defined language that can access all of the information in the database; SQL is the standard example. If data can be accessed in any way other than through this language (say, by reading a table's underlying file directly, bypassing the SQL interface), this rule is violated.

Rule 6: View Updating Rule

All views that are theoretically updatable must be updatable by the system.

Rule 7: High-Level Insert, Update, and Delete

Insert, update, and delete operations must be supported at the level of whole relations (sets of rows), not just single rows. Set operations such as union, intersection, and minus should also be supported.

Rule 8: Physical Data Independence

The physical storage of data must not affect applications. If, say, some file underlying a table is renamed or moved from one disk to another, applications should be unaffected.

Rule 9: Logical Data Independence

The user's view of the data should not change when the database's logical structure (table structures) changes. Say a table is divided into two tables; a view joining the two tables should then give the same result as the original. This rule is the most difficult to satisfy.

Rule 10: Integrity Independence

The database must be able to enforce its own integrity rather than relying on application programs. Keys, check constraints, triggers, etc. should be stored in the data dictionary. This also makes the RDBMS independent of the front-end.

Rule 11: Distribution Independence

The database should work correctly over a network regardless of how its data is distributed. This is the foundation of distributed databases.

Rule 12: Nonsubversion Rule

If the system allows low-level (record-at-a-time) access, that access must not be able to subvert or bypass the integrity rules to change data. This can be achieved by some form of locking or encryption.

Explain in brief how form fields are validated using JavaScript.

Validating Form Fields Using JavaScript on the Front End



Validating user-submitted data is important, as it may contain inappropriate values, so validation is essential.

JavaScript provides the facility to validate the form on the client side, so processing is faster than server-side validation. Therefore, most web developers prefer JavaScript form validation.

Using JavaScript, we can validate fields such as names, passwords, emails, dates, and mobile numbers.

Basic Validation - First of all, the form must ensure that all mandatory fields are filled in. This requires just one loop through each field in the form to check for data.
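The loop described above can be sketched as follows (a minimal sketch; the field objects stand in for a form's elements collection and are illustrative, not from the original):

```javascript
// Require that every mandatory field has a non-empty value.
// `fields` stands in for a form's elements collection (illustrative).
function allFieldsFilled(fields) {
  for (const field of fields) {
    if (!field.value || field.value.trim() === "") {
      return false; // a mandatory field is blank
    }
  }
  return true;
}

console.log(allFieldsFilled([{ value: "Asha" }, { value: "secret1" }])); // true
console.log(allFieldsFilled([{ value: "Asha" }, { value: "   " }]));     // false
```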

Data Format Validation - Second, the data that is entered must be checked for correct form and value. Your code must contain the appropriate logic to verify the correctness of the data.
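For data-format checks, a sketch using a regular expression (the pattern and function name are illustrative assumptions, not from the original; real-world email validation is more permissive):

```javascript
// Check that a field value matches a simple email-like pattern:
// some characters, an @, and a domain part containing a dot.
function isValidEmail(value) {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);
}

console.log(isValidEmail("user@example.com")); // true
console.log(isValidEmail("not-an-email"));     // false
```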

JavaScript Example


<html>
<head>
<script>
function validateform() {
  var name = document.myform.name.value;
  var password = document.myform.password.value;
  if (name == null || name == "") {
    alert("Name can't be blank");
    return false;
  } else if (password.length < 6) {
    alert("Password must be at least 6 characters long.");
    return false;
  }
}
</script>
</head>
<body>
<form name="myform" method="post" action="valid.php" onsubmit="return validateform()">
Name: <input type="text" name="name"><br/>
Password: <input type="password" name="password"><br/>
<input type="submit" value="register">
</form>
</body>
</html>

Output

Explain in brief how a pie chart can be drawn using the PHP GD library.

Getting Fancy with Pie Charts

Basic shapes are a little boring, but they introduce the process of creating images: define the canvas, define the colors, and then draw and fill.

Use this same sequence of events to expand your scripts to create charts and graphs, using either static or dynamic data for the data points.

The following script draws a basic pie chart.


<?php
// create the canvas
$myImage = ImageCreate(150, 150);

// set up some colors
$white = ImageColorAllocate($myImage, 255, 255, 255);
$red   = ImageColorAllocate($myImage, 255, 0, 0);
$green = ImageColorAllocate($myImage, 0, 255, 0);
$blue  = ImageColorAllocate($myImage, 0, 0, 255);

// draw a pie
ImageFilledArc($myImage, 50, 50, 100, 50, 0, 90, $red, IMG_ARC_PIE);
ImageFilledArc($myImage, 50, 50, 100, 50, 91, 180, $green, IMG_ARC_PIE);
ImageFilledArc($myImage, 50, 50, 100, 50, 181, 360, $blue, IMG_ARC_PIE);

// output the image to the browser
header("Content-type: image/jpeg");
ImageJpeg($myImage);

// clean up after yourself
ImageDestroy($myImage);
?>

Output


ImageFilledArc() takes several arguments:

  1. The image identifier.
  2. The x-coordinate of the center of the ellipse.
  3. The y-coordinate of the center of the ellipse.
  4. The width of the ellipse.
  5. The height of the ellipse.
  6. The start angle of the arc, in degrees.
  7. The end angle of the arc, in degrees.
  8. The color.
  9. The style (e.g., IMG_ARC_PIE).
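The start and end angles passed to such calls can be derived from data values. A minimal sketch (the function name and data values are illustrative, not from the original):

```javascript
// Convert data values into cumulative start/end angles (in degrees)
// for pie slices, as used by ImageFilledArc-style calls.
function sliceAngles(values) {
  const total = values.reduce((sum, v) => sum + v, 0);
  let start = 0;
  return values.map(v => {
    const end = start + (v / total) * 360;
    const slice = { start: Math.round(start), end: Math.round(end) };
    start = end;
    return slice;
  });
}

console.log(sliceAngles([25, 25, 50]));
// [ { start: 0, end: 90 }, { start: 90, end: 180 }, { start: 180, end: 360 } ]
```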

Explain in brief how HTML and PHP code can be combined on a single page.

Combining HTML and PHP Code on a Single Page

PHP in HTML

While creating a complex page, at times you will need to use both PHP and HTML to achieve your desired results.

At first, this seems complicated, because PHP and HTML are two separate languages, but that is not the case.

PHP is designed to interact with HTML, and PHP scripts can be embedded in an HTML page without any problems.

Take the following as an example:

<?php echo "The solution to 2 + 2 is "; ?>
<?php echo 2 + 2; ?>
Output: The solution to 2 + 2 is 4

There's really no reason to include two sets of opening and closing tags here; you could easily just write: <?php echo "The solution to 2 + 2 is "; echo 2 + 2; ?>
PHP files can contain both static text and executable PHP code. 
Whether or not a specific part of the file is rendered directly as text, or intepreted as PHP code is down to the use of the opening and closing PHP tags. 
Whenever the code is between the <?php and ?> tags, it will be executed as PHP code.
Example
<html>
  <head>
    <title><?php echo "My First Program"; ?></title>
  </head>
  <body>
    <?php echo "<h1>Hello India</h1>"; ?>
  </body>
</html>

Output: Hello India


What is the difference between TCP/IP model and OSI model?



OSI Model                    TCP/IP Model

7. Application layer         4. Application layer
6. Presentation layer
5. Session layer
4. Transport layer           3. Transport layer
3. Network layer             2. Network (Internet) layer
2. Data link layer           1. Physical (network access) layer
1. Physical layer

Layer 7 - Application: This layer works with application software to provide communication functions as required; it checks the availability of communication partners and of the resources needed to support a data transfer. Protocols operating at this layer include the Domain Name System (DNS), File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP), Internet Message Access Protocol (IMAP), Post Office Protocol (POP), Simple Mail Transfer Protocol (SMTP), and Telnet/terminal emulation.

Layer 6 - Presentation: This layer ensures the data is compatible with the communication resources. It ensures compatibility between the data formats used by applications and those used by the lower layers, and it handles any data formatting, code conversion, data compression, and encryption.

Layer 5 - Session: Software at this layer handles authentication and authorization functions. It also manages the connection between the two communicating devices: it establishes the connection, maintains it, and closes it at the end. This layer also verifies that the data is actually delivered.

Layer 4 - Transport: This layer provides quality-of-service (QoS) functions and ensures the complete delivery of data. Data integrity is ensured at this layer through error correction and similar functions.

Layer 3 - Network: The network layer handles packet routing via logical addressing and switching functions.

Layer 2 - Data Link: This layer packages the data into frames for transmission and unpacks received frames, handling node-to-node delivery.

Layer 1 - Physical: This layer defines the logic levels, data rates, physical medium, and data conversion functions that turn a packet's bit stream into a signal carried from one device to another.




Explain Verification and Validation (V&V) in software engineering.

Verification and Validation (V&V)



Verification vs. Validation

Verification: Are we building the system right?
Validation: Are we building the right system?

Verification: The process of evaluating the products of each development phase to find out whether they meet the specified requirements.
Validation: The process of evaluating software at the end of development to determine whether it meets customer expectations and requirements.

Verification: Its purpose is to ensure that the product is developed according to the requirements and design specifications.
Validation: Its purpose is to ensure that the product actually meets the user's needs, and to check that the specifications were right in the first place.

Verification: Includes activities such as reviews, meetings, and inspections.
Validation: Includes testing activities such as black-box, white-box, and grey-box testing.

Verification: Carried out by the QA team to check whether the software conforms to the documentation.
Validation: Carried out by the testing team.

Verification: Execution of code does not come under verification.
Validation: Execution of code comes under validation.

Verification: Checks whether the outputs are according to the inputs.
Validation: Checks whether the software is accepted by the user or not.

Verification: Carried out before validation.
Validation: Carried out just after verification.

Verification: Items evaluated include plans, requirement specifications, design specifications, code, test cases, etc.
Validation: The item evaluated is the actual software, under realistic test conditions.

Verification: The cost of errors caught in verification is less than that of errors found in validation.
Validation: The cost of errors caught in validation is more than that of errors found in verification.

Verification: Essentially a manual check of artifacts such as documents and requirements.
Validation: Essentially checking the developed program against the requirement specification documents.

What do you mean by empirical estimation models? Explain COCOMO model with suitable example?


--------------------------------------------------------------------------------------------------------------------------

Empirical estimation models estimate effort and schedule using formulas derived empirically from data on past projects. COCOMO is one such model; the name stands for COnstructive COst MOdel.

As with all estimation models, COCOMO requires sizing information and accepts it in three forms: object points, function points, and lines of source code.

Application composition model - Used during the early stages of software engineering, when the following are important:

– Prototyping of user interfaces
– Consideration of software and system interaction
– Assessment of performance
– Evaluation of technology maturity

Early design stage model – Used once requirements have been stabilized and basic software architecture has been established

Post-architecture stage model – Used during the construction of the software



Organic, Semidetached and Embedded software projects

  • Organic: A development project can be considered of organic type, if the project deals with developing a 
    well understood application program, the size of the development team is reasonably small, and the 
    team members are experienced in developing similar types of projects.

  • Semidetached: A development project can be considered of semidetached type, if the development 
    consists of a mixture of experienced and inexperienced staff. Team members may have limited 
    experience on related systems but may be unfamiliar with some aspects of the system being developed.

  • Embedded: A development project is considered to be of embedded type, if the software being developed is strongly coupled to complex hardware, or if the stringent regulations on the operational 
    procedures exist.

The basic COCOMO model gives an approximate estimate of the project parameters. The basic COCOMO
estimation model is given by following expressions:

Effort = a1 × (KLOC)^a2 PM (person-months)

Time of Development = b1 × (Effort)^b2 months

where a1, a2, b1, b2 are constants for each category of software product.

Estimation of Effort

Organic: Effort = 2.4 × (KLOC)^1.05 PM

Semi-detached: Effort = 3.0 × (KLOC)^1.12 PM

Embedded: Effort = 3.6 × (KLOC)^1.20 PM

Estimation Time of Development

Organic: Time of Development = 2.5 × (Effort)^0.38 months

Semi-detached: Time of Development = 2.5 × (Effort)^0.35 months

Embedded: Time of Development = 2.5 × (Effort)^0.32 months

Example:

Assume that the size of an organic software product has been estimated to be 32,000 lines of source code, and that the average salary of a software engineer is Rs. 15,000 per month. Determine the effort required to develop the software product and the nominal development time.

Effort = 2.4 × (32)^1.05 ≈ 91 PM

Time of development = 2.5 × (91)^0.38 ≈ 14 months

Cost = 14 × 15,000 = Rs. 2,10,000
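The worked example above can be reproduced with a small script (a minimal sketch; the constants are the basic COCOMO organic-mode values from the formulas above, and cost is computed as in the example, development time × monthly salary):

```javascript
// Basic COCOMO, organic mode: Effort = 2.4 * KLOC^1.05,
// Tdev = 2.5 * Effort^0.38 (constants from the formulas above).
function basicCocomoOrganic(kloc, salaryPerMonth) {
  const effort = 2.4 * Math.pow(kloc, 1.05);      // person-months
  const tdev = 2.5 * Math.pow(effort, 0.38);      // months
  const cost = Math.round(tdev) * salaryPerMonth; // as in the worked example
  return { effort: Math.round(effort), tdev: Math.round(tdev), cost };
}

console.log(basicCocomoOrganic(32, 15000));
// { effort: 91, tdev: 14, cost: 210000 }
```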



Differentiate between Black Box Testing and White Box Testing with suitable Example



Black Box Testing vs. White Box Testing

Black box: A software testing method in which the internal structure/design/implementation of the item being tested is not known to the tester. The tester is mainly concerned with validating the output rather than how the output is produced (the internal functioning of the item under test is not important from the tester's point of view).
White box: A software testing method in which the internal structure/design/implementation of the item being tested is known to the tester. The tester validates the internal structure of the item along with the output.

Black box: Programming and implementation knowledge is not necessary.
White box: Programming and implementation knowledge (internal structure and working) is required.

Black box: Done by a professional testing team; can be done without knowledge of the item's internal code.
White box: Generally done by the programmers who developed the item, or by programmers who understand the item's internals.

Black box: Internal system design is not considered in this type of testing.
White box: The internal software and code workings must be known for this type of testing.

Black box: Testers are not required to know the coding or internal structure of the software.
White box: Testing is based on knowledge of the internal logic of the application's code.

Black box: Tests are based on requirements and functionality; the method relies on exercising the software with various inputs and validating the results against expected outputs.
White box: The approach is used in unit testing, usually performed by software developers; it is also known as clear box, transparent box, or glass box testing.

Black box: Tests are conducted at the software interface.
White box: Testing is predicated on close examination of procedural detail.

Black box: A testing strategy based solely on requirements and specifications; it requires no knowledge of the internal paths, structures, or implementation of the software being tested.
White box: A testing strategy based on the internal paths, code structures, and implementation of the software being tested; it generally requires detailed programming skills.


Black-box Testing (functional)
Can you see what's inside a closed black box? No, right? Similarly, the black-box method treats the application under test (AUT) as a black box, with no knowledge of its internal structure. As a result, we are not concerned with how the application's internals are maintained or changed, as long as the outside functionality works as expected (as per requirements). Knowing what the application does is more important than knowing how it does it. This is the most widely used method for system and acceptance testing, as it does not require professionals with coding knowledge and it provides an external perspective of the AUT (that of an end user who has no knowledge of the actual code).
E.g., we are only concerned with whether the user can watch television, change channels, adjust the volume, etc.
White-box Testing (structural)
It's obvious: just reverse the approach. Since it's a white box, we can see what's inside it, i.e., the internal structure, and use that knowledge to expand coverage to test every possible flow at the code level, for example statement coverage, branch coverage, or path coverage. It requires programming skills and is usually preferred for the unit and integration test levels. You can call it by different names (clear box, glass box, or transparent box) as long as you can see the internal contents of the box :-).
E.g., we are concerned with whether the television's internal circuitry is designed correctly.
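As a toy illustration (the function and test values are invented for illustration): a black-box test only checks input/output pairs against the requirement, while white-box tests are chosen so that every branch in the code executes at least once.

```javascript
// Function under test: classify the sign of a number.
function sign(n) {
  if (n > 0) return "positive";
  if (n < 0) return "negative";
  return "zero";
}

// Black-box style: an input drawn from the spec, only the output checked.
console.log(sign(42) === "positive"); // true

// White-box style: inputs chosen to cover all three branches.
console.log(sign(42), sign(-7), sign(0)); // positive negative zero
```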