Project Deliverable 3: Database and Data Warehousing Design

Due Week 6 and worth 120 points.

This assignment consists of two (2) sections: a design document and a revised project plan. You must submit both sections as separate files for the completion of this assignment. Label each file name according to the section of the assignment it is written for. Additionally, you may create and/or assume all necessary assumptions needed for the completion of this assignment.

One of the main functions of any business is to transform data into information. The use of relational databases and data warehousing has gained recognition as a standard for organizations. A quality database design makes the flow of data seamless. The database schema is the foundation of the relational database. The schema defines the tables, fields, relationships, views, indexes, and other elements. The schema should be created by envisioning the business, processes, and workflow of the company. Since your company is an innovative Internet-based company, movement toward data warehousing seems to be one of the most viable options to give your company a competitive advantage; however, these concepts must be explained to the executive board in a manner to garner support.

Section 1: Design Document

Write a six to ten (6-10) page design document in which you:

  1. Support the need for the use of relational databases and data warehousing. From a management standpoint, it may be important to show the efficiencies that can be gained for executive oversight.
  2. Create a database schema that supports the company’s business and processes. Explain and support the database schema with relevant arguments that support the rationale for the structure. Note: The minimum requirement for the schema should entail the tables, fields, relationships, views, and indexes.
  3. Identify and create database tables with appropriate field-naming conventions. Then, identify primary keys and foreign keys, and explain how referential integrity will be achieved. Normalize the database tables to third normal form (3NF).
  4. Identify and create an Entity-Relationship (E-R) Diagram relating the tables of the database schema through the use of graphical tools in Microsoft Visio or an open source alternative such as Dia. Note: The graphically depicted solution is not included in the required page length but must be included in the design document appendix. Explain your rationale behind the design of the E-R Diagram.
  5. Identify and create a Data Flow Diagram (DFD) relating the tables of your database schema through the use of graphical tools in Microsoft Visio or an open source alternative such as Dia. Note: The graphically depicted solution is not included in the required page length but must be included in the design document appendix. Explain the rationale behind the design of your DFD.
  6. Illustrate the flow of data including both inputs and outputs for the use of a data warehouse. The diagram should map data between source systems, operational systems, data warehouses and specified data marts. Note: The graphically depicted solution is not included in the required page length.

Your assignment must follow these formatting requirements:

  • Be typed, double spaced, using Times New Roman font (size 12), with one-inch margins on all sides; citations and references must follow APA or school-specific format. Check with your professor for any additional instructions.
  • Include a cover page containing the title of the assignment, the student’s name, the professor’s name, the course title, and the date. The cover page and the reference page are not included in the required assignment page length.
  • Include charts or diagrams created in MS Visio or Dia as an appendix of the design document. All references to these diagrams must be included in the body of the design document.


Project Deliverable 3- Database and Data Warehousing Design

CIS 599 Graduate Info Systems Capstone

Abstract

In the previous deliverables I completed the project plan inception with an introduction and then the business requirements document. In this project deliverable I will explain the database and data warehouse design for the international merger project. One of the main functions of any business is to transform data into information. The use of relational databases and data warehousing has gained recognition as a standard for organizations. A quality database design makes the flow of data seamless. The database schema is the foundation of the relational database. The schema defines the tables, fields, relationships, views, indexes, and other elements, and it should be created by envisioning the business, processes, and workflow of the company. Since our company is an innovative Internet-based company, movement toward data warehousing seems to be one of the most viable options to give the company a competitive advantage; however, these concepts must be explained to the executive board in a manner that garners support.

In the first section of this assignment I will explain the need for relational databases and data warehousing, and then I will create a database schema that supports the company's business and processes. For that schema I will elaborate relevant arguments that support the rationale for its structure. In the same section I will identify and create database tables with appropriate field-naming conventions, identify primary keys and foreign keys, explain how referential integrity will be achieved, and normalize the database tables to third normal form (3NF).

In the second section I will identify and create an Entity-Relationship (E-R) diagram relating the tables of the database schema through the use of graphical tools in Microsoft Visio, and then identify and create a Data Flow Diagram (DFD) relating the tables.

In the last segment I will illustrate the flow of data, including both inputs and outputs, for the use of a data warehouse. The diagram will map data between source systems, operational systems, the data warehouse, and specified data marts. The revised project plan is attached separately.

Need for the use of relational databases and data warehousing

A data warehouse is a relational database that is designed for query and analysis rather than for transaction processing. It usually contains historical data derived from transaction data, but it can include data from other sources. It separates analysis workload from transaction workload and enables an organization to consolidate data from several sources. In addition to a relational database, a data warehouse environment includes an extraction, transportation, transformation, and loading (ETL) solution, an online analytical processing (OLAP) engine, client analysis tools, and other applications that manage the process of gathering data and delivering it to business users.

A common way of introducing data warehousing is to refer to the characteristics of a data warehouse.

Subject Oriented-Data warehouses are designed to help you analyze data. For example, to learn more about your company’s sales data, you can build a warehouse that concentrates on sales. Using this warehouse, you can answer questions like “Who was our best customer for this item last year?” This ability to define a data warehouse by subject matter, sales in this case, makes the data warehouse subject oriented.
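As a brief, hypothetical illustration of such a subject-oriented query (the sales_fact and customer table and column names below are assumptions used only for this sketch, not part of our actual schema), the "best customer for this item last year" question could be answered in SQL along these lines:

-- Hypothetical sketch: find last year's best customer for one product.
-- Assumes warehouse tables sales_fact(customer_id, product_id, quantity, order_date)
-- and customer(customer_id, customer_name).
SELECT c.customer_name,
       SUM(s.quantity) AS total_units
FROM sales_fact s
JOIN customer c ON c.customer_id = s.customer_id
WHERE s.product_id = 1234
  AND s.order_date >= DATE '2013-01-01'
  AND s.order_date <  DATE '2014-01-01'
GROUP BY c.customer_name
ORDER BY total_units DESC;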

Integrated-Integration is closely related to subject orientation. Data warehouses must put data from disparate sources into a consistent format. They must resolve such problems as naming conflicts and inconsistencies among units of measure. When they achieve this, they are said to be integrated.

Nonvolatile-Nonvolatile means that, once entered into the warehouse, data should not change. This is logical because the purpose of a warehouse is to enable you to analyze what has occurred.

Time Variant-In order to discover trends in business, analysts need large amounts of data. This is very much in contrast to online transaction processing (OLTP) systems, where performance requirements demand that historical data be moved to an archive. A data warehouse’s focus on change over time is what is meant by the term time variant.

With a relational database, you can quickly compare information because of the arrangement of data in columns. The relational database model takes advantage of this uniformity to build completely new tables out of required information from existing tables. In other words, it uses the relationship of similar data to increase the speed and versatility of the database; the “relational” part of the name comes from these mathematical relations. A typical relational database has anywhere from 10 to more than 1,000 tables. Each table contains a column or columns that other tables can key on to gather information from that table. By storing this information in another table, the database can create a single small table that can then be used for a variety of purposes by other tables in the database. A typical large database, such as the one a major Web site like Amazon would have, contains hundreds or thousands of tables like this, all used together to quickly find the exact information needed at any given time. Relational databases are created using a special computer language, Structured Query Language (SQL), which is the standard for database interoperability. SQL is the foundation for all of the popular database applications available today, from Access to Oracle.

A number of new concepts and tools have evolved and been incorporated into a technology called data warehousing. Simply put, a data warehouse is a storage facility used by an organization to store extremely large amounts of information. It is a relational database that is specifically designed for query and analysis processing instead of transaction processing. It is a well-organized, well-structured, and resourceful method of organizing, managing, and reporting data that would otherwise be non-uniform and scattered throughout the organization in different systems. The prominent features of a data warehouse are that it enables recording, collecting, and filtering of data from different systems at higher levels. Normally it contains historical data derived from transactional data, but it can also include data from other sources. It helps an organization consolidate data from several sources by separating the analysis workload from the transactional workload. Additionally, a data warehouse environment includes an extraction, transportation, transformation, and loading (ETL) solution, an online analytical processing (OLAP) engine, analysis tools, and other tools that look after the process of gathering data and finally delivering it to business users. The data stored in these warehouses must be kept in a way that is reliable, secure, and easy to process and manage. The need for data warehousing arises as businesses become more complex and start generating and gathering huge amounts of data that are difficult to manage in the traditional way.
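As a minimal sketch of the loading step in such an ETL solution (the staging_orders, staging_order_details, and sales_fact table names below are assumptions used only for illustration), a periodic load into the warehouse might be expressed in SQL as:

-- Hypothetical ETL load step: consolidate cleansed order rows from the
-- staging area into a single warehouse fact table.
INSERT INTO sales_fact (order_id, customer_id, product_id, quantity, sale_amount, sale_date)
SELECT o.order_id,
       o.cust_id,
       d.product_id,
       d.quantity_ordered,
       d.quantity_ordered * d.price_each,   -- transformation: derive the sale amount
       o.created
FROM staging_orders o
JOIN staging_order_details d ON d.order_id = o.order_id
WHERE o.created >= DATE '2014-01-01';       -- load only rows added since the last load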

Create a database schema that supports the company’s business and processes. Explain and support the database schema with relevant arguments that support the rationale for the structure.

A database schema of a database system is its structure described in a formal language supported by the database management system (DBMS); it refers to the organization of data as a blueprint of how a database is constructed (divided into database tables in the case of relational databases). The formal definition of a database schema is a set of formulas (sentences), called integrity constraints, imposed on a database. These integrity constraints ensure compatibility between parts of the schema. All constraints are expressible in the same language. A database can be considered a structure in realization of the database language. The states of a created conceptual schema are transformed into an explicit mapping, the database schema. This describes how real-world entities are modeled in the database.

A database schema specifies, based on the database administrator’s knowledge of possible applications, the facts that can enter the database, or those of interest to the possible end users. The notion of a database schema plays the same role as the notion of theory in predicate calculus. A model of this “theory” closely corresponds to a database, which can be seen at any instant of time as a mathematical object. Thus a schema can contain formulas representing integrity constraints specifically for an application and the constraints specifically for a type of database, all expressed in the same database language. In a relational database, the schema defines the tables, fields, relationships, views, indexes, packages, procedures, functions, queues, triggers, types, sequences, materialized views, synonyms, database links, directories, XML schemas, and other elements.

Database schema

The schema defines the table structure. Each entity becomes a table, and the relationships are defined depending on the type of association between the entities.

1. Office - location in the world, phone number, address, and state.

2. Customer - name, address, code, city, email ID, phone.

3. Order - order ID, description, location, etc.

4. Employee - first name, last name, ID, etc.

5. Payment - payment type, amount, card ID.

6. Product - product ID, type, name, quality, price, etc.

7. Product line - product line, text description, image, etc.

8. Product sale - count, order, sale, etc.

9. Order detail - order ID, attributes, etc.

Identify and create database tables with appropriate field-naming conventions. Then, identify primary keys and foreign keys, and explain how referential integrity will be achieved. Normalize the database tables to third normal form (3NF).

OFFICES: Office_ID (PK), City, Phone_Number, Address, State, Country, PostalCode, Territory

EMPLOYEES: Emp_Num (PK), FirstName, LastName, Extension, Email, Office_ID (FK), ReportTo, JobTitle

CUSTOMERS: Cust_ID (PK), CustomerFirst_Name, CustomerLast_Name, Phone, Address, City, State, Postal_Code, Country, Emp_Num (FK)

PAYMENT: Cust_ID (FK), CheckNumber, PaymentDate, Amount

ORDER: Order_ID (PK), Created, Customer, Cust_ID (FK)

ORDER_DETAIL: Order_ID (FK), Product_ID (FK), Attribute_Name, QuantityOrdered, PriceEach, OrderLineNumber

PRODUCT: Product_ID (PK), Product_Name, ProductLine, Quantity, Price, Condition, ProductVenders, PriceDiscount, OptimisticLockField, GCRecord

PRODUCT_SALE: Product_ID (FK), Order, Count, Total, Summary, OptimisticLockField, GCRecord

PRODUCT_LINE: ProductLine (PK), TextDescription, HTMLDescription, Image

Primary keys are marked (PK) and foreign keys are marked (FK) in each table.
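As a minimal sketch of how two of these tables could be declared (the column data types and lengths are assumptions, not final design decisions), the OFFICES and EMPLOYEES tables might be created in SQL as follows:

-- Illustrative DDL for the OFFICES and EMPLOYEES tables;
-- data types are assumed for the sketch.
CREATE TABLE offices (
    office_id     INTEGER PRIMARY KEY,
    city          VARCHAR(50),
    phone_number  VARCHAR(20),
    address       VARCHAR(100),
    state         VARCHAR(50),
    country       VARCHAR(50),
    postal_code   VARCHAR(15),
    territory     VARCHAR(20)
);

CREATE TABLE employees (
    emp_num     INTEGER PRIMARY KEY,
    first_name  VARCHAR(50),
    last_name   VARCHAR(50),
    extension   VARCHAR(10),
    email       VARCHAR(100),
    office_id   INTEGER NOT NULL,
    report_to   INTEGER,
    job_title   VARCHAR(50),
    FOREIGN KEY (office_id) REFERENCES offices (office_id),  -- each employee belongs to one office
    FOREIGN KEY (report_to) REFERENCES employees (emp_num)   -- self-reference to the employee's manager
);

The same pattern extends to the remaining tables: every foreign key column points back to the primary key of its parent table, which is what allows the DBMS to enforce referential integrity, as discussed next.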

Referential integrity is a property of data which, when satisfied, requires every value of one attribute (column) of a relation (table) to exist as a value of another attribute in a different (or the same) relation (table). For referential integrity to hold in a relational database, any field in a table that is declared a foreign key can contain either a null value or only values from a parent table’s primary key or a candidate key. In other words, when a foreign key value is used it must reference a valid, existing primary key in the parent table. For instance, deleting a record that contains a value referred to by a foreign key in another table would break referential integrity. Some relational database management systems (RDBMS) can enforce referential integrity, normally either by deleting the foreign key rows as well to maintain integrity, or by returning an error and not performing the delete. Which method is used may be determined by a referential integrity constraint defined in a data dictionary.
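To make the two enforcement options concrete, the hypothetical constraint below (using lowercase versions of the CUSTOMERS and PAYMENT tables from the schema above as an assumption) shows both behaviors: with the plain constraint the RDBMS returns an error when a delete would orphan a payment, while ON DELETE CASCADE removes the dependent rows as well.

-- Hypothetical constraint: every payment must reference an existing customer.
ALTER TABLE payments
    ADD CONSTRAINT fk_payments_customer
    FOREIGN KEY (cust_id) REFERENCES customers (cust_id);

-- Alternative behavior: deleting a customer also deletes that customer's payments.
-- ALTER TABLE payments
--     ADD CONSTRAINT fk_payments_customer
--     FOREIGN KEY (cust_id) REFERENCES customers (cust_id)
--     ON DELETE CASCADE;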

3NF states that any column that is not dependent on the primary key should be removed to the table it does depend on. Another way of putting this is that only foreign key columns should be used to reference another table, and no other columns from the parent table should exist in the referenced table. Third normal form (3NF) is a database principle that allows you to cleanly organize your tables by building upon the database normalization principles provided by 1NF and 2NF. Sometimes within an entity we can find that there exists a “key” and “dependent” relationship between a group of non-key attributes. For example, if a course table stores both a Tutor Id and a Tutor Name, that relationship exists between Tutor Id and Tutor Name; in this case they are removed to form a new table. If we did not perform the 3NF conversion, then the course tutor’s details (in this case, the name only) would be repeated each time one of this tutor’s courses was stored. Here is the process:

Identify any dependencies between non-key attributes within each table

Remove them to form a new table

Promote one of the attributes to be the key of the new table

There are two basic requirements for a database to be in third normal form:

-Already meet the requirements of both 1NF and 2NF

-Remove columns that are not fully dependent upon the primary key.

Imagine that we have a table of widget orders that contains the following attributes:

Order Number

Customer Number

Unit Price

Quantity

Total

Remember, our first requirement is that the table must satisfy the requirements of 1NF and 2NF. Are there any duplicative columns? No. Do we have a primary key? Yes, the order number. Therefore, we satisfy the requirements of 1NF. Are there any subsets of data that apply to multiple rows? No, so we also satisfy the requirements of 2NF.

Now, are all of the columns fully dependent upon the primary key? The customer number varies with the order number and doesn’t appear to depend upon any of the other fields. What about the unit price? This field could be dependent upon the customer number in a situation where we charged each customer a set price. However, in practice we sometimes charge the same customer different prices, so the unit price is fully dependent upon the order number. The quantity of items also varies from order to order, so we’re OK there.

What about the total? It looks like we might be in trouble here. The total can be derived by multiplying the unit price by the quantity; therefore it’s not fully dependent upon the primary key. We must remove it from the table to comply with third normal form. We are left with the following attributes:

Order Number

Customer Number

Unit Price

Quantity

Now our table is in 3NF. But, you might ask, what about the total? This is a derived field and it’s best not to store it in the database at all. We can simply compute it “on the fly” when performing database queries. For example, we might have previously used this query to retrieve order numbers and totals:

SELECT OrderNumber, Total

FROM WidgetOrders

We can now use the following query:

SELECT OrderNumber, UnitPrice * Quantity AS Total

FROM WidgetOrders

We can achieve the same results without violating normalization rules.
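If the derived total is needed frequently, one option (a sketch, not a requirement of 3NF) is to wrap the calculation in a view so that users can still query the total by name while the stored table remains normalized:

-- Optional view that exposes the derived Total without storing it.
CREATE VIEW WidgetOrderTotals AS
SELECT OrderNumber,
       CustomerNumber,
       UnitPrice,
       Quantity,
       UnitPrice * Quantity AS Total
FROM WidgetOrders;

-- Callers can then query the view much as they queried the old table:
SELECT OrderNumber, Total
FROM WidgetOrderTotals;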

Identify and create an Entity-Relationship (E-R) Diagram relating the tables of the database schema through the use of graphical tools in Microsoft Visio or an open source alternative such as Dia. Explain your rationale behind the design of the E-R Diagram.

 

(See Appendix, Figure 1: E-R diagram created in Microsoft Visio relating the OFFICES, EMPLOYEES, CUSTOMERS, PAYMENT, ORDER, ORDER_DETAIL, PRODUCT, PRODUCT_SALE, and PRODUCT_LINE tables, with their primary and foreign keys.)

 

In software engineering, an entity-relationship model (ER model) is a data model for describing a database in an abstract way. The model used here follows the techniques proposed in Peter Chen’s 1976 paper, although variants of the idea existed previously and have been devised subsequently, such as supertype and subtype data entities and commonality relationships. An ER model is an abstract way of describing a database. In the case of a relational database, which stores data in tables, some of the data in these tables point to data in other tables; for instance, your entry in the database could point to several entries for each of the phone numbers that are yours. The ER model would say that you are an entity, each phone number is an entity, and the relationship between you and the phone numbers is ‘has a phone number’. Diagrams created to design these entities and relationships are called entity-relationship diagrams or ER diagrams.

The objective is to develop a simple system for managing customer purchase orders. First, you must identify the business entities involved and their relationships. To do that, you draw an entity-relationship (E-R) diagram. ER modeling is a data modeling technique used in software engineering to produce a conceptual data model of an information system. Diagrams created using this ER-modeling technique are called entity-relationship diagrams, ER diagrams, or ERDs. So you can say that entity-relationship diagrams illustrate the logical structure of databases. Dr. Peter Chen is the originator of the entity-relationship model, and his original paper about ER modeling is one of the most cited papers in the computer software field. Currently the ER model serves as the foundation of many system analysis and design methodologies, computer-aided software engineering (CASE) tools, and repository systems.

Identify and create a Data Flow Diagram (DFD) relating the tables of your database schema through the use of graphical tools in Microsoft Visio or an open source alternative such as Dia. Explain the rationale behind the design of your DFD.

 

(See Appendix, Figure 2: Data flow diagram created in Microsoft Visio showing how the CUSTOMERS, EMPLOYEES, OFFICES, PRODUCTS, ORDERS, and PAYMENTS entities interact through selling, ordering, invoicing, and payment flows.)

 

Data Flow Diagram

Data flow diagrams (DFDs) reveal relationships among and between the various components in a program or system. DFDs are an important technique for modeling a system’s high-level detail by showing how input data is transformed to output results through a sequence of functional transformations. DFDs consist of four major components: entities, processes, data stores, and data flows. The symbols used to depict how these components interact in a system are simple and easy to understand; however, there are several DFD models to work from, each having its own symbology. DFD syntax does remain constant by using simple verb and noun constructs. This syntactical consistency makes DFDs ideal for object-oriented analysis and for parsing functional specifications into precise DFDs for the systems analyst (Hispacom Group, 1996).

When it comes to conveying how information data flows through systems (and how that data is transformed in the process), data flow diagrams (DFDs) are the method of choice over technical descriptions for three principal reasons.

1. DFDs are easier to understand by technical and nontechnical audiences

2. DFDs can provide a high-level system overview, complete with boundaries and connections to other systems

3. DFDs can provide a detailed representation of system components

DFDs help system designers and others during initial analysis stages visualize a current system or one that may be necessary to meet new requirements. Systems analysts prefer working with DFDs, particularly when they require a clear understanding of the boundary between existing systems and postulated systems. DFDs represent the following:

1. External devices sending and receiving data

2. Processes that change that data

3. Data flows

4. Data storage locations

The most important thing to remember is that there are no hard and fast rules when it comes to producing DFDs, but there are when it comes to valid data flows. For the most accurate DFDs, you need to become intimate with the details of the use case study and functional specification. This isn’t a cakewalk necessarily, because not all of the information you need may be present. Keep in mind that if your DFD looks like a Picasso, it could be an accurate representation of your current physical system. DFDs don’t have to be art; they just have to accurately represent the actual physical system for data flow.

Illustrate the flow of data including both inputs and outputs for the use of a data warehouse. The diagram should map data between source systems, operational systems, data warehouses, and specified data marts.

• Quantitative data of inputs and outputs of the processes including energy and mass flows, human labor contributions, and associated greenhouse gas emissions;

• Quantitative data of mass and energy flows at an aggregated national level including consumption, production, imports, and exports.

The data will become available in an online database, to be made accessible through a graphical user interface with flow-diagram outputs to improve usability. Simply put, open access is given to both individual data points and complete supply chains at any level of boundary conditions. A user can enter any specific good or process and combine it with a starting and ending point for previous and successive processes to obtain the energy and resource flows within these boundaries.

By making such data available this project will contribute to future analyses which:

• Identify the effects of rising energy and material costs on individual sectors and industries;

• Create an understanding for non-integrated companies to the composition of their total supply chain;

• Demonstrate the effects of production and consumption on environmental impacts including greenhouse gas emissions and other wastes;

Patil, Rao, and Patil (2011) stated that a data mart is a logical subset of the complete data warehouse: a data mart is a complete “pie-wedge” of the overall data warehouse pie. A data mart represents a project that can be brought to completion rather than being an impossible galactic undertaking, and a data warehouse is made up of the union of all its data marts. Beyond this rather simple logical definition, we often view the data mart as the restriction of the data warehouse to a single business process or to a group of related business processes targeted toward a particular business group. The data mart is probably sponsored by and built by a single part of the business, and a data mart is usually organized around a single business process.

Every data mart is subject to some very specific design requirements. Every data mart must be represented by a dimensional model and, within a single data warehouse, all such data marts must be built from conformed dimensions and conformed facts. This is the basis of the data warehouse bus architecture. Without conformed dimensions and conformed facts, a data mart is a stovepipe, and stovepipes are the bane of the data warehouse movement. If one has any hope of building a data warehouse that is robust and resilient in the face of continuously evolving requirements, one must adhere to the data mart definition recommended here. When data marts have been designed with conformed dimensions and conformed facts, they can be combined and used together (Inmon, 1996).

The MIKE2.0 methodology (2013) notes that an OLTP system requires a normalized structure to minimize redundancy, provide validation of input data, and support a high volume of fast transactions. A transaction usually involves a single business event, such as placing an order or posting an invoice payment. An OLTP model often looks like a spider web of hundreds or even thousands of related tables. Data warehouse storage also utilizes indexing techniques to support high-performance access. A technique called bitmap indexing constructs a bit vector for each value in a domain (column) being indexed, and it works well for domains of low cardinality. Bitmap indexing can provide considerable input/output and storage space advantages in low-cardinality domains, and with bit vectors a bitmap index can provide dramatic improvements in comparison, aggregation, and join performance.

In a star schema, dimensional data can be indexed to tuples in the fact table by join indexing. Join indexes are traditional indexes that maintain relationships between primary key and foreign key values; they relate the values of a dimension of a star schema to rows in the fact table. Data warehouse storage can facilitate access to summary data by taking further advantage of the nonvolatility of data warehouses and a degree of predictability in the analyses that will be performed using them.
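As a hedged illustration of how such a star schema is typically queried (the sales_fact, product_dim, and time_dim names below are assumptions for the sketch), an aggregate query joins the fact table to its dimensions on exactly the key columns a bitmap or join index would cover:

-- Hypothetical star-schema query: total sales by product line and month.
SELECT p.product_line,
       t.calendar_month,
       SUM(f.sale_amount) AS total_sales
FROM sales_fact f
JOIN product_dim p ON p.product_key = f.product_key
JOIN time_dim t    ON t.time_key = f.time_key
GROUP BY p.product_line, t.calendar_month
ORDER BY p.product_line, t.calendar_month;

-- In an RDBMS that supports bitmap indexes (Oracle, for example), the fact
-- table's low-cardinality dimension keys can be indexed to speed these joins:
-- CREATE BITMAP INDEX sales_fact_prod_bix ON sales_fact (product_key);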

 

(See Appendix, Figure 3: Data warehouse data flow diagram mapping data from the OFFICES, EMPLOYEES, CUSTOMER, and PAYMENTS source systems through extract, transform, and load (ETL) processes and an operational data store (ODS) into the relational data warehouse and its data marts.)

 

References

Hispacom Group. (1996). Knowledge asset management and corporate memory [White paper].

Inmon, W. H. (1996). Building a data warehouse (2nd ed.). Wiley.

MIKE2.0. (2013). Information theory & business intelligence strategy: Small worlds data transformation measure. Mike2.openmethodology.org. Retrieved June 14, 2013.

Patil, P. S., Rao, S., & Patil, S. B. (2011). Optimization of data warehousing system: Simplification in reporting and analysis. IJCA Proceedings on International Conference and Workshop on Emerging Trends in Technology (ICWET), 9(6), 33-37.

 

 

