Thursday, August 27, 2020

Json in SQL Server - Load and convert Json data

Data is the backbone of any business, and it can come in any sort of format. It can be lying in your relational databases, or it can be a tweet on Twitter, a post on Facebook, or a message on Instagram. Worldwide data is expected to hit 175 zettabytes by 2025, and over 80% of it will be in an unstructured format due to social media and mobile applications.

Unstructured data usually refers to information that doesn't reside in a traditional row-column database. For example, data stored in XML and JSON documents, CSV files, and Excel files is generally considered unstructured or semi-structured. RDBMSs like Oracle Database, Microsoft SQL Server, and MySQL are adapting to this change and introducing functionality that can handle such data.

My recent articles focused on such newly introduced functions that help in analyzing, importing, or converting unstructured data into a readable and meaningful format.

In this article, we will see how a JSON-formatted file can be converted into the usual row-column table format in SQL Server 2016 using the OPENJSON function. JSON (JavaScript Object Notation) is an open-standard file format that consists of attribute-value pairs and array data types. Look at the example below.

Filename: “test.json”

[{"Name":"Zafar Iqbal","surname": "Farooq", "Age":30, "Address":["142", "Street 29c", "Rashidiya", "Dubai"]},

{"Name":"Tahir","surname": "Farooq", "Age":24, "Address":["12C", "Happy Street", "Al Quoz", "Dubai"]}]

 

DECLARE @Json VARCHAR(MAX)
SELECT @Json = BulkColumn
FROM OPENROWSET (BULK 'D:\MyDocuments\blog posts\JSON-SQLServer\test.json', SINGLE_CLOB) AS j 

SELECT * FROM OPENJSON (@Json)
     WITH (Name     VARCHAR(20) '$.Name',
           LastName VARCHAR(20) '$.surname',
           Age      INT         '$.Age',
           Building VARCHAR(10) '$.Address[0]',
           Street   VARCHAR(20) '$.Address[1]',
           Area     VARCHAR(20) '$.Address[2]',
           City     VARCHAR(20) '$.Address[3]')

Explanation:

  • In the above code, we are importing the “test.json” contents into a variable named @Json.
     
  • The OPENROWSET function is invoked to read the data from the file. Since it is a table-valued function, it returns a table; with SINGLE_CLOB, that table has a single column named "BulkColumn".

  • SINGLE_CLOB reads a file as VARCHAR(MAX).

  • “AS” in the above code provides a correlation name for the bulk rowset in the FROM clause. The alias name can be anything.

  • Once we have populated the JSON data into a variable, we can convert it into rows and columns by parsing it with the OPENJSON function. OPENJSON accepts JSON as a parameter and returns the key-value pairs of the first-level elements in the JSON.

  • The Address field in the JSON is an array consisting of building number, street, area, and city, which we split into separate columns in the query. The output is –
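If you just want to inspect the structure before defining a schema, OPENJSON can also be called without a WITH clause; it then returns the key, value, and type of every first-level element. A minimal sketch, reusing the same file as above:

DECLARE @Json VARCHAR(MAX)
SELECT @Json = BulkColumn
FROM OPENROWSET (BULK 'D:\MyDocuments\blog posts\JSON-SQLServer\test.json', SINGLE_CLOB) AS j

-- Without a WITH clause, each array element comes back as a key/value/type row
SELECT [key], [value], [type] FROM OPENJSON (@Json)

-- Drill into the Address array of the first element
SELECT [key], [value] FROM OPENJSON (@Json, '$[0].Address')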

Wednesday, August 26, 2020

Sqoop Complete Tutorial Part-4

This is the continuation of "Sqoop Complete Tutorial". If you want to read the previous parts -


11) Importing data from MySQL to HDFS based on an SQL query

sqoop import
--connect jdbc:mysql://localhost/retail_db
--username root
--password cloudera
--query 'SELECT * FROM orders WHERE $CONDITIONS'
--target-dir /user/cloudera/orders_query
-m 1

Note: When --query is used, the literal token $CONDITIONS must appear in the WHERE clause (Sqoop replaces it with its split conditions at run-time), --target-dir is mandatory, and either --split-by or -m 1 must be specified. The target directory name above is just an example.

Append & LastModified

The following is as per the Sqoop documentation.

Sqoop supports two types of incremental imports: append and lastmodified. You can use the --incremental argument to specify the type of incremental import to perform.

You should specify append mode when importing a table where new rows are continually being added with increasing row id values. You specify the column containing the row’s id with --check-column. Sqoop imports rows where the check column has a value greater than the one specified with --last-value.

An alternate table update strategy supported by Sqoop is called lastmodified mode. You should use this when rows of the source table may be updated, and each such update will set the value of a last-modified column to the current timestamp. Rows where the check column holds a timestamp more recent than the timestamp specified with --last-value are imported.

At the end of an incremental import, the value which should be specified as --last-value for a subsequent import is printed to the screen. When running a subsequent import, you should specify --last-value in this way to ensure you import only the new or updated data. This is handled automatically by creating an incremental import as a saved job, which is the preferred mechanism for performing a recurring incremental import. See the section on saved jobs later in this document for more information.

The following two are examples of incremental imports.

12) Importing data from MySQL to HDFS - Incremental Import (append)

Assume that the below is the "emp" data in MySQL's "EMPDEPT" database.


Import it into HDFS using the below command:

sqoop import
--connect jdbc:mysql://localhost:3306/empdept
--table emp
--username root
--password cloudera
-m 1

After importing it into HDFS, insert two new rows into the MySQL table.

These two new records also need to be appended to the data in HDFS. To achieve this, use the below command:

sqoop import
--connect jdbc:mysql://localhost:3306/empdept
--table emp
--username root
--password cloudera
--check-column empno
--incremental append
--last-value 7521

13) Importing updated data from MySQL to HDFS - Incremental Import (lastmodified)

Consider the same example specified above.

As per the data, the latest HireDate in the table is 1981-09-28. Now, update one or two records, as shown below.

The records that are updated in the source table also need to be updated in the destination (HDFS). The command to do that -

sqoop import
--connect jdbc:mysql://localhost:3306/empdept
--table emp
--username root
--password cloudera   
--check-column HireDate
--incremental lastmodified
--last-value 1981-09-28
--target-dir /user/cloudera/empdept
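As the Sqoop documentation quoted above notes, the preferred way to run a recurring incremental import is to save it as a Sqoop job, so that the --last-value is tracked for you. A minimal sketch of such a saved job for the append example (the job name "emp_incremental" is just an example; Sqoop automation is covered in detail later in this series):

sqoop job --create emp_incremental -- import
--connect jdbc:mysql://localhost:3306/empdept
--table emp
--username root
-P
--check-column empno
--incremental append
--last-value 7521

sqoop job --exec emp_incremental

Each time the job is executed, Sqoop records the new --last-value, so subsequent runs import only the newly added rows.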


Please click here for the next part


Find SQL scheduled jobs to run during a specific time range

There are times when we need to know which jobs are going to run in the next half an hour or so. It is quite easy to check this in the Job Activity Monitor. However, if you want to find it through a query, or if you want to build a report, here is the code.


SELECT  j.name AS [Name],
        msdb.dbo.agent_datetime(next_run_date, next_run_time) AS [Next Run Date-Time],
        STUFF(STUFF(RIGHT(REPLICATE('0', 6) + CAST(active_start_time AS VARCHAR(6)), 6), 3, 0, ':'), 6, 0, ':') AS [Active Start Time],
        STUFF(STUFF(RIGHT(REPLICATE('0', 6) + CAST(active_end_time AS VARCHAR(6)), 6), 3, 0, ':'), 6, 0, ':') AS [Active End Time],
        CASE
             WHEN freq_type = 4  THEN 'Daily'
             WHEN freq_type = 8  THEN 'Weekly'
             WHEN freq_type = 16 THEN 'Monthly'
             WHEN freq_type = 32 THEN 'Monthly'
        END AS [Frequency],
        'every ' + CAST(freq_interval AS VARCHAR(3)) + ' day(s)' AS [Days],
        CASE
             WHEN freq_subday_type = 2 THEN ' every ' + CAST(freq_subday_interval AS VARCHAR(7)) + ' seconds'
             WHEN freq_subday_type = 4 THEN ' every ' + CAST(freq_subday_interval AS VARCHAR(7)) + ' minutes'
             WHEN freq_subday_type = 8 THEN ' every ' + CAST(freq_subday_interval AS VARCHAR(7)) + ' hours'
             ELSE ' starting at ' + STUFF(STUFF(RIGHT(REPLICATE('0', 6) + CAST(active_start_time AS VARCHAR(6)), 6), 3, 0, ':'), 6, 0, ':')
        END AS [Time]
FROM msdb..sysjobschedules s
JOIN msdb..sysjobs j ON s.job_id = j.job_id
JOIN msdb..sysschedules sc ON sc.schedule_id = s.schedule_id
WHERE CONVERT(VARCHAR(10), msdb.dbo.agent_datetime(next_run_date, next_run_time), 101) = CONVERT(VARCHAR(10), GETDATE(), 101)
  AND next_run_date > 0 AND next_run_time > 0
  AND CONVERT(VARCHAR(5), msdb.dbo.agent_datetime(next_run_date, next_run_time), 114) >= '16:00'
  AND CONVERT(VARCHAR(5), msdb.dbo.agent_datetime(next_run_date, next_run_time), 114) <= '20:30'
  AND j.enabled = 1
ORDER BY msdb.dbo.agent_datetime(next_run_date, next_run_time)


This will return the data in the below format.

 

Name: Name of the job

Next Run Date-Time: The next occurrence of the job that falls within the time frame mentioned in the query, i.e., 16:00 to 20:30 hours.

Active Start Time: Job starting point in the day.

Active End Time: Job ending point in the day.

Frequency: Daily/Weekly/Monthly

Days: Scheduled days (every day, every 2 days, every x number of days, etc.)

Time: The interval between each occurrence within the day, or the start time if the job runs once per day.



Tuesday, August 25, 2020

Sqoop Complete Tutorial Part-3

This is the continuation of "Sqoop Complete Tutorial". If you want to read -

Part-1, please click here


Part-2, please click here


5) Importing all the tables but excluding a few tables from MySQL to HDFS

sqoop import-all-tables
--connect jdbc:mysql://localhost/retail_db
--username root
--password cloudera
--exclude-tables "order_items" 

Note: With import-all-tables, there is no way to import only a few selected tables to HDFS; the only option available is "exclude-tables". If there is more than one table to exclude, you can mention them in the same property, separated by commas (,), as shown below.

sqoop import-all-tables
--connect jdbc:mysql://localhost/retail_db
--username root
--password cloudera
--exclude-tables "order_items,orders" 


6) Importing part of the data from a MySQL table to HDFS using a WHERE condition.

sqoop import
--connect jdbc:mysql://localhost/EmpDB
--username root
--table emp
--where "ename='scott'"
-m 1


7) Importing a few columns from a MySQL table to HDFS.

sqoop import
--connect jdbc:mysql://localhost/EmpDB
--table EMP  
--columns "EmpID,Sal”
-m 1


8) Importing and compressing the table's data and storing it in HDFS.

sqoop import
--connect jdbc:mysql://localhost/EmpDB
--username root 
--table emp
--target-dir empdata/emp
--compress -m 1


9) Importing data from MySQL to HDFS with a specific delimiter.

sqoop import
--connect jdbc:mysql://localhost/EmpDB
--table EMP
--columns "EmpID,Sal”
--fields-terminated-by '@'
-m 1

Note: This will throw an error if you do not provide the username and password to connect to MySQL. Once the data is imported, read the contents of the file either using the Hue file browser or from the Hadoop terminal using "-cat", the usual Linux command. You'll find '@' as the delimiter between the columns.
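For example, a quick way to check the delimiter from the Hadoop terminal (the path below assumes the default target directory created for the EMP table; adjust it to your environment):

hadoop fs -cat /user/cloudera/EMP/part-m-00000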


10) Import data by splitting it based on a specific column.

sqoop import
--connect jdbc:mysql://localhost/EmpDB
--table EMPDEPT
--username root
-P
--split-by deptno

Note: This is one of the most important options when dealing with a huge volume of data. Without it, the number of mappers or splits may be too small, which can take a long time to complete and can affect performance. Hadoop is meant for parallelism, and to utilize it we use "--split-by" on a column to increase the number of splits, which means more parallel processes.

In the above example, I have employee and department data in my database, which I combined into a single table called EmpDept. Say there are 10 departments and 50 employees in each department. Once I execute the above command, Sqoop splits the data based on the "deptno" column and runs the import tasks in parallel. This is how to improve import performance through parallelism.


Sqoop Complete Tutorial Part-2

This is the continuation of "Sqoop Complete Tutorial". If you want to read Part-1, please click here. We are about to discuss the following in the current and upcoming parts.

Importing data from MySQL to HDFS

  • Importing a table from MySQL to the default location of the HDFS
  • Importing a table from MySQL to user-specified location in HDFS
  • Importing all the tables from MySQL to the default location of the HDFS
  • Importing all the tables from MySQL to user-specified location in HDFS.
  • Importing all the tables but excluding few tables from MySQL to HDFS.
  • Importing part of the data from a table from MySQL to HDFS using WHERE condition.
  • Importing a few columns from a table from MySQL to HDFS
  • Importing and compressing the table's data to HDFS
  • Importing data by specifying the column delimiter.
  • Importing data by splitting the data based on a specific column.
  • Import the data from MySQL to HDFS based on a SQL Query.
  • Incremental import of data from MySQL into HDFS.
  • Import and append the data.
  • Import data from MySQL and store in a sequential file format in HDFS.
  • Read the contents of the source database without importing it.
  • Get the list of the databases.
The format of the import process is the same; however, some additional keywords will be used if the target is not HDFS.

Importing data from MySQL to Hive
  • Importing a table from MySQL to Hive's default database.
  • Importing a table from MySQL to Hive's user's database.
  • Importing all the tables from MySQL to Hive
  • Importing all the tables but excluding few from MySQL to Hive
  • Import the table's structure from MySQL to Hive without data.
  • Import the table from MySQL to Hive with the specific delimiter.
Sqoop Automation
  • Sqoop - Creating a job
  • Sqoop - Displaying the list of the jobs
  • Sqoop - Displaying the properties of a job
  • Sqoop - Executing the job
  • Sqoop - Deleting the job

Importing data from MySQL to HBase
  • Importing a table from MySQL to HBase
  • Importing a table with some selected columns from MySQL to HBase

Ok, let us begin the exercises with the above-mentioned points one by one.

Importing data from MySQL to HDFS


1) Importing a table from MySQL to HDFS default location.

sqoop import 
        --connect jdbc:mysql://localhost/retail_db 
        --table categories 
        --username root 
        --password cloudera 
        -m 1


Explanation: "Sqoop Import" is the keyword to be used while import. Sqoop is a JDBC utility to move data between Hadoop Ecosystems and relational databases. Username and password must be provided to connect to the RDBMS. 

Assigning mappers is the user's choice. Users can assign the number of mappers based on the size of the data being imported. If the password is not to be disclosed in the command, the user can simply pass the -P argument instead of --password; at run-time, it will prompt you to enter the password. Refer to below -

Importing a table from MySQL to HDFS default location - Hiding password.

sqoop import 
        --connect jdbc:mysql://localhost/retail_db 
        --table categories 
        --username root 
        -P
        -m 1

The above command will create a directory named "categories" under "/user/cloudera", and the table data will be stored in it as a file (part-m-00000).
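To verify, you can list the imported files from the Hadoop terminal (assuming the default home directory /user/cloudera):

hadoop fs -ls /user/cloudera/categories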

2) Importing a table from MySQL to HDFS user-specified target location.

sqoop import
--connect jdbc:mysql://localhost/retail_db
--table categories
--username root
--password cloudera
--target-dir 'retail_db'
-m 1

3) Importing all the tables from MySQL to HDFS default location

sqoop import-all-tables
--connect jdbc:mysql://localhost/retail_db
--username root
--password cloudera

Note: Each table will be placed in its own sub-directory, as files, under the user's HDFS home directory.

4) Importing all the tables from MySQL to HDFS user-specified target location.

sqoop import-all-tables
--connect jdbc:mysql://localhost/retail_db
--username root
--password cloudera
--warehouse-dir 'retaildb'

Note: All the tables will be placed in the specified directory. The keyword "target-dir" will not work with import-all-tables; "warehouse-dir" should be used instead. No mapper count is assigned here since a single mapper would be insufficient to import the "retail_db" database, as its volume is higher than the HDFS block size.

 

Please click here for the next part.


Sqoop Complete Tutorial Part-1

Sqoop (SQL-to-Hadoop) is a Hadoop ecosystem component and an ETL tool that offers the capability to extract data from various structured data stores, such as relational databases, with the help of MapReduce. This command-line tool works efficiently to transfer huge volumes of data into the Hadoop ecosystem (HDFS, Hive, and HBase). Similarly, it exports data from Hadoop back to SQL databases.

Sqoop offers parallel processing and fault tolerance, along with automation functionality that helps in scheduling data transfer jobs.


Sqoop Import

Imports individual tables or all the tables from an RDBMS into HDFS, Hive, or HBase. By default, the table data is stored as text files.

In HDFS, each row of the table is treated as a record, each table is imported into its own sub-directory, and the table data is stored as text files by default. Users can opt to store the table data in text format, as binary data, in Avro format, in a columnar (Parquet) file format, or in SequenceFile format. Users can also choose whether the data should be compressed.
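For instance, a minimal sketch of importing a table as a compressed SequenceFile (the connection string, table, and directory names here are just examples; adjust them to your environment):

sqoop import
--connect jdbc:mysql://localhost/retail_db
--table categories
--username root
-P
--as-sequencefile
--compress
--target-dir 'categories_seq'
-m 1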

In Hive, the target database must be created before importing all the tables from the RDBMS. If not, the tables will be imported into Hive's ‘default’ database.

Sqoop Import will take care of creating the table if it does not already exist in the Hive metastore.

Sqoop Export

Exports files from the Hadoop distributed file system to an RDBMS. Sqoop will parse the contents of the files and convert them into rows of a table. Each line is split into multiple columns based on the delimiter specified.
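A minimal sketch of an export (the MySQL table "categories_export" must already exist with a matching schema, and the export directory is assumed to hold previously imported, comma-delimited data; both names are placeholders):

sqoop export
--connect jdbc:mysql://localhost/retail_db
--table categories_export
--username root
-P
--export-dir /user/cloudera/categories
--input-fields-terminated-by ','
-m 1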

Before we start working with Sqoop, please note the following:

Sqoop is a command-line interface, and the terminal of the Hadoop node is used to issue Sqoop commands. There is hardly any need for a GUI since the scope of Sqoop is limited to importing and exporting data. Please also note that Sqoop is case-sensitive: one has to be very careful with the names of tables, columns, directories, sub-directories, and source or target locations, along with the Sqoop reserved keywords.





Monday, August 24, 2020

Creating Linked Server for PostgreSQL in MSSQL

Creating a linked server in Microsoft SQL Server to connect to PostgreSQL

1) Download the PostgreSQL Unicode (x64) driver to support ODBC.

2) Go to the ODBC Data Source Administrator on your machine and create a System DSN.


3) Go to SQL Server Management Studio and execute the below command in a new query window.

EXEC master.dbo.sp_addlinkedserver
     @server = N'POSTGRES',
     @srvproduct = N'PostgreSQL35W',
     @datasrc = 'PostgreSQL35W',
     @provider = N'MSDASQL',
     @provstr = N'Driver=PostgreSQL 64-bit ODBC Drivers;uid=yourUserName;Server=LocalHost;database=YourDatabaseName;pwd=DBPassword'

4) Once executed successfully, go to Object explorer.

5) Expand the Server Objects and then Linked Servers. 

Now you will be able to see the newly created linked server. If you expand further, you will see the database name in the catalogs.
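To quickly test the linked server, you can run a pass-through query with OPENQUERY (the table name "employees" below is just a placeholder; replace it with a table from your PostgreSQL database):

SELECT * FROM OPENQUERY(POSTGRES, 'SELECT * FROM employees LIMIT 5');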


Please do let me know if you are facing any issues.


Automation - PostgreSQL Restore Data

There are three methods of backing up PostgreSQL databases:

  • SQL dump
  • File System Level backup
  • Continuous archiving

The SQL dump method is used to take a full database backup. The backup file consists of SQL commands that will recreate the database when it gets restored. The below command is used to take a backup through the CLI.

pg_dump dbname > dumpfile

The dump file can be restored on other testing or staging servers by using the following command.

psql dbname < dumpfile


Now, if we want both to be handled in a single step, such as taking a backup and restoring it in another environment -

pg_dump -h host1 dbname | psql -h host2 dbname

The ability of pg_dump and psql to write to and read from pipes makes it feasible to dump a database directly from one server to another in a single session. In other words, pg_dump creates the dump of the "dbname" database on the "host1" server, and psql restores it into the "dbname" database on the "host2" server.

These are all fine for one-off operations; however, we usually prefer to automate both the database backup and restore operations. It is a bit difficult in PostgreSQL to achieve such a scheduled or automated backup and restore process. Free tools may not offer much in this case, and most companies might not prefer to install third-party tools without knowing how safe they are.

So, this leaves us with creating Windows Task Scheduler tasks to automate them.

Automating the data restoration in another instance/server:

Go to the server on which you want to restore the database and create the Windows task with the help of the below steps.

1) Go to Task Scheduler.

2) Right-click on "Task Scheduler Library"

3) Go To Actions

4) Click New

5) Action should be "Start a program"

6) In the Settings, Program/script - copy the below executable file.

C:\Windows\System32\cmd.exe

7) In the "Add arguments (optional)" add the following command.

/c "psql -U yourUserName yourDatabaseName  < D:\dbname.sql"

8) Click OK

9) Go to "Triggers" and Select "New" to configure the schedule

10) Select "Daily" and select the recurrence details.

11) Tick "Stop the task if it runs longer than" and select "1 hour" (this is optional).

12) Click OK, FINISH.


Automating the backup of the database.

All the above steps are to be followed, and the command should be replaced with -

/c "pg_dump -h localhost -p 5432 -U yourUserName yourDatabaseName > D:\dbname.sql"

Here, pg_dump connects to localhost on the default port 5432, dumps the database you specified, and writes the backup file to the "D" drive. Change these parameters based on your environment.
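One more point for scheduled runs: neither pg_dump nor psql can prompt for a password inside a Task Scheduler job, so the credentials have to be supplied non-interactively. A common approach (an assumption here, not part of the original steps) is to create a pgpass.conf file for the Windows account that runs the task, at %APPDATA%\postgresql\pgpass.conf, with one line per connection in the format host:port:database:user:password, for example:

localhost:5432:yourDatabaseName:yourUserName:DBPassword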


If you are not familiar or comfortable with such an implementation, you can simply install one of the free GUI tools available for database backups.

For example, SQLBackupAndFTP

This is free and supports backing up two databases.


Hope you find this article helpful, do let me know if you need my assistance.
