Feed aggregator

SQL Server: Generating SQL script using PowerShell and Template file

Yann Neuhaus - Sun, 2020-10-18 11:46

In this blog post, I will share with you a small PowerShell script I did recently.

I have noticed that my customer performs a very repetitive and time-consuming task almost every day.
New columns are added to tables in their business-critical database, and they need to maintain SQL script files with the ALTER TABLE statements for each new column.

For every new column, my customer copy-pastes the following SQL script and then changes parts of it.

/***********************************
*
* New column 
*            Schema:       Order
*            Table:        TestTable2     
*            Column:       ColumnName1    
*            
* History    
*            Date:         18/10/2020 
*            User:         Steven Naudet 
*
************************************/

IF NOT EXISTS (
       SELECT * 
       FROM sys.tables AS t 
       JOIN sys.[columns] AS c ON t.[object_id] = c.[object_id]
       JOIN sys.schemas AS s ON s.[schema_id] = t.[schema_id]  
       WHERE 1=1 
       AND s.name = 'Order'  
       AND t.name = 'TestTable2' 
       AND c.name = 'ColumnName1' 
) 
BEGIN 
       PRINT 'Altering table Order.TestTable2 adding column [ColumnName1]' ; 
       ALTER TABLE [Order].TestTable2 
       ADD 
       ColumnName1 INT NOT NULL; 
END 

/***********************************
*
* End New column ColumnName1  
*
************************************/

The highlighted lines are manually edited by my customer every time there’s a new column to be added to the database, which can occur 20 times per week.
I decided to write a PowerShell function to do this task faster so my customer can work on more interesting things instead.

The idea is to use a template file for the SQL script, similar to the SSMS templates.
The PowerShell script fills in the template and sends the resulting SQL to the clipboard using Set-Clipboard.
Consecutive calls to the function append the SQL commands one after another in the clipboard. This way my customer can simply paste the generated SQL script into his SQL source control tool.
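The template file itself is not shown in the post, but judging from the Replace calls in the script, it presumably looks something like the following. The angle-bracket placeholder names are taken from the code; the exact file contents are an assumption on my part.

```sql
/***********************************
*
* New column
*            Schema:       <SchemaName>
*            Table:        <TableName>
*            Column:       <ColumnName>
*
* History
*            Date:         <Date>
*            User:         <UserName>
*
************************************/

IF NOT EXISTS (
       SELECT *
       FROM sys.tables AS t
       JOIN sys.[columns] AS c ON t.[object_id] = c.[object_id]
       JOIN sys.schemas AS s ON s.[schema_id] = t.[schema_id]
       WHERE s.name = '<SchemaName>'
       AND t.name = '<TableName>'
       AND c.name = '<ColumnName>'
)
BEGIN
       PRINT 'Altering table <SchemaName>.<TableName> adding column [<ColumnName>]';
       ALTER TABLE [<SchemaName>].[<TableName>]
       ADD <ColumnName> <ColumnDefinition>;
END
```

Note that the String.Replace method used by the script is case-sensitive, so the placeholders in the file must match the spelling in the script exactly.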

You can see the script in action with the GIF below.

PowerShell Script in action GIF

Here is the script.

function New-AddColumnSQL {

    [CmdletBinding()]
    param (
        [Parameter(Mandatory=$true)][string] $Schema,
        [Parameter(Mandatory=$true)][string] $Table,
        [Parameter(Mandatory=$true)][string] $Column,
        [Parameter(Mandatory=$true)][string] $Type,
        [Parameter(Mandatory=$false)][string] $defaultValue,
        [Parameter(Mandatory=$false)][switch] $isNotNull = $false,
        [Parameter(Mandatory=$false)][string] $User = 'Steven NAUDET'
    )

    $TemplateFile = 'Z:\scripts\TemplateAddColumn.sql'

    $Clipboard = Get-Clipboard
    
    # Append if the clipboard already holds generated SQL; otherwise clear it first
    if ($Clipboard -like '*Altering table*') {
        $returnMessage = 'SQL Script appended to Clipboard'
    } else {
        $returnMessage = 'SQL Script pasted to Clipboard'
        Set-Clipboard -Value $null
    }

    $ColumnDef = $Type

    # NOT NULL
    if($isNotNull) { 
        $ColumnDef = $ColumnDef + ' NOT'
    }
    $ColumnDef = $ColumnDef + ' NULL'

    # DEFAULT value
    if($defaultValue) { 
        $ColumnDef = $ColumnDef + ' DEFAULT ' + $defaultValue
    }

    $SQLscript = Get-Content -Path $TemplateFile
    
    $SQLscript = $SQLscript.Replace('<Date>', (Get-Date -UFormat "%d/%m/%Y"))
    $SQLscript = $SQLscript.Replace('<SchemaName>', $Schema)
    $SQLscript = $SQLscript.Replace('<TableName>', $Table)
    $SQLscript = $SQLscript.Replace('<ColumnName>', $Column)
    $SQLscript = $SQLscript.Replace('<UserName>', $User)
    $SQLscript = $SQLscript.Replace('<ColumnDefinition>', $ColumnDef)

    Set-Clipboard $SQLscript -Append

    return $returnMessage

}
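A call to the function might look like this (the parameter values are examples of my own, not the customer's real schema; the call assumes the template file exists at the path configured inside the function and that Set-Clipboard is available, i.e. Windows PowerShell 5.0+ or PowerShell 7 on Windows):

```powershell
# Hypothetical example values; adjust schema/table/column/type to your own case
New-AddColumnSQL -Schema 'Order' -Table 'TestTable2' -Column 'ColumnName1' `
                 -Type 'INT' -isNotNull -defaultValue '0'
# Returns 'SQL Script pasted to Clipboard' on the first call,
# then 'SQL Script appended to Clipboard' on subsequent calls,
# since the function detects its own previous output in the clipboard.
```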

There’s probably a lot of room for improvement in this code, but the goal of this blog post is to show you how handy PowerShell can be. It can help you save a lot of time.
It took me about an hour to write this code, and I’m sure my customer will save more than that every month.

The article SQL Server: Generating SQL script using PowerShell and Template file appeared first on Blog dbi services.

Oracle Database Appliance vs Oracle Cloud Infrastructure

Yann Neuhaus - Sun, 2020-10-18 05:33
Introduction

Oracle Database Appliances are very popular these days, and not only among new customers for this kind of engineered system. Almost all customers already using old-generation ODAs are renewing their infrastructure by choosing ODAs again, meaning that the solution is good enough and probably better than anything else. But now public clouds are a real alternative to on-premise servers, and Oracle Cloud Infrastructure is a solid competitor to the Amazon and Azure public clouds. So what’s the best solution for your databases, ODA or OCI? Let’s do the match.

Round 1 – Cost

Yes, it is important. You will need to buy ODAs and you will need a budget for that. Nothing new regarding this platform: it requires an investment. ODA is affordable now that entry-level models are available, but if you need a significant amount of storage, it comes at a cost. Fortunately, that cost is quite similar to any other x86 platform, and the ODA doesn’t have the hidden costs that come with troubleshooting compatibility issues on a build-it-yourself platform.
Cost works differently on OCI. Basically, you pay for servers, storage, and services on a monthly basis. No initial investment is needed, and that is one of the advantages of OCI. However, don’t expect the TCO to be lower than acquiring your own hardware. I do not mean that cloud solutions are expensive, but after some years the cost will be quite similar to an on-premise solution. Going to the cloud is mainly about changing your mind about what an infrastructure is. Is it servers you manage on your own, or is it a platform for running your information system?
There is no winner in this round; you will only know after several years which solution would have been the least expensive.

Winner: none

Round 2 – Deployment speed

ODA allows fast deployment of a new database infrastructure. Actually, it’s the best on-premise solution on that point, and it’s a serious advantage over DIY platforms. Being able to create your first database the same day you open the box is quite nice. But OCI is even better, because at this very moment, your future servers are already available, Terabytes of storage are waiting for you, and databases are almost there, a few clicks away. If you’re looking for fast deployment, OCI is an easy winner.

Winner: OCI

Round 3 – Security

Everybody is talking about security. Is my database safer in the cloud than in my own datacenter? Actually, it’s quite hard to tell. For sure, OCI is a public cloud, meaning that your database can be reached from virtually everywhere. But you will probably build strong security rules to protect your cloud infrastructure. You will use an IPSec VPN between OCI and your on-premise site, or a FastConnect channel to establish a dedicated link between your on-premise equipment and OCI, so that data does not transit through the internet. Putting your database in the cloud is no less secure than giving your employees or providers remote access to your infrastructure. Furthermore, databases in OCI are stored encrypted, even with Standard Edition and without the need for the Advanced Security option.
On ODA, your database sits in your own network: it is not on something public, and it is less visible. This is good, but again, only if you have good security rules inside your company.

Winner: none

Round 4 – Performance

ODA is a strong performer, especially the X8-2M model. With up to 75TB of NVMe SSD, it’s quite tough to achieve better performance with anything else. Yes, you could grab a few MB/s more, or shave off a few milliseconds, with some other solutions, but do you really think your users would see the difference? No. And what about OCI? OCI relies on SSD storage only, which is a very good start. And does it offer NVMe? Yes, for sure. Bare metal servers (BM Dense I/O) provide up to 51TB of raw capacity based on 8 NVMe drives. And something tells me that these servers are actually nearly the same as ODA X7-2Ms. So expect similar performance from both solutions.

Winner: none

Round 5 – License management

No doubt on-demand capacity for Enterprise licenses is one of the key features of the ODA. You can start with only 1 Enterprise license on each ODA and increase the number of licenses when you need more resources: a kind of fine-tuning for licenses.

On OCI, you can choose to bring your own license, bought a long time ago, and keep your investment in case you later want to go back to an on-premise infrastructure. Or you can choose to include the license fees in the monthly fees. With the first option, you manage your licenses as you always did, and you should be careful when you increase the cloud resources dedicated to your databases (mainly the OCPUs). With the second option, you don’t have to manage your licenses anymore: you don’t need to buy them, pay the yearly support, or review them regularly, because everything is included with your OCI database services. It’s simply a great feature.

Winner: OCI

Round 6 – Simplicity

ODA and OCI share the same goal: simplifying your database infrastructure. ODA simplifies by providing the best automation available for deploying the complex Oracle stack, and when you come from an existing on-premise infrastructure, migration to ODA is quite easy. OCI looks even simpler, but while you will not have to work on the servers, you’ll have to think about how to implement your infrastructure. Which subnet for my databases? Should I also move my application servers? What kind of network connectivity with my on-premise environment? Which kind of database service fits my needs?

If you’re starting from scratch with Oracle databases, it’s probably simpler to go directly to OCI. If you’re migrating from an existing on-premise environment, it’s probably simpler to replace your existing servers with ODAs. No winner here.

Winner: none

Round 7 – Control

For some customers, being able to control their infrastructure is vital. On public clouds, you will not have control over everything, because someone else does part of the maintenance job, mostly through automated tasks. For other customers, that is exactly what they no longer want to manage. On ODA, you control everything on your server: it’s not even mandatory to connect it to the internet. Updates on ODA cannot be automated and are applied manually through good old zip files, and in case of serious problems, an ODA is fast to redeploy. So if you need total control over your infrastructure, the ODA is the best solution for you.

OCI is only a good solution if you have already accepted giving up some of that control.

Winner: ODA

Round 8 – Resilience

Disaster recovery solutions were not so common 10 years ago. People relied on tape backups, were confident about this solution, and believed they would be able to restore the tape “somewhere”, without asking where “somewhere” actually was. At best, the old servers were kept for disaster recovery usage, in the same rack.
This has definitely changed, and disaster recovery is now part of each new infrastructure design. On the software side of the database, the technology is mature and highly reliable (with Data Guard or Dbvisit Standby). The most complex part is designing the split across multiple datacenters (2, most of the time): implementing it cleverly, avoiding any Single Point Of Failure that could wipe out the efforts made to achieve high resiliency, is a tough challenge. An ODA is a server like any other, and you will have to do the same amount of design work to build a highly resilient infrastructure.

Cloud providers have been thinking about disaster recovery since the very beginning. The datacenters are spread all around the world, and each one has separate availability domains (isolated building blocks), allowing multiple levels of disaster recovery scenarios. Furthermore, storage and backups naturally embed this high resilience. And as everyone will use the same mechanisms, you can trust OCI regarding resilience.

As a conclusion, it must be said: it’s nearly impossible to reach OCI’s level of resilience with an on-premise ODA infrastructure.

Winner: OCI

What about the other solutions?

For sure, it is still possible to build your own database infrastructure with classic servers. But do you really have time for that?
Exadata is also a nice solution if you need such a beast: multi-TB databases with a high number of transactions, or the fastest BI platform. And it can now bring you the advantages of both OCI and an appliance with the Cloud@Customer mode: Oracle puts the server in your datacenter, and you pay for it monthly as if you were using it in the cloud.
A hybrid solution with a mix of ODA and OCI could also fit your needs, but you’ll have to manage both technologies, and that’s not so smart. Unless you need this kind of solution for the transition to the cloud…

Conclusion

Is ODA better than OCI? Is OCI better than ODA? Both solutions are smart choices, and neither will disappoint you if you manage to leverage the advantages and avoid the constraints of each one. On OCI, you will benefit from immediate availability of resources, fast provisioning, flexibility, and no-brainer license management. With ODA, you will keep your database infrastructure at home, and you will have strong performance and full control over your servers, including over the cost. Choosing between these two solutions is purely a matter of strategy, and it does not concern only the DBA.

The article Oracle Database Appliance vs Oracle Cloud Infrastructure appeared first on Blog dbi services.

Followup with Database runInstaller applyRU Failed Me

Michael Dinh - Sat, 2020-10-17 22:57

This is a followup Database runInstaller applyRU Failed Me

I finally figured out my error, which I should have seen from the beginning; better error reporting would have helped.

Can you guess what’s wrong?

unzip -qod $ORACLE_HOME /vagrant_software/p6880880_190000_LINUX.zip; echo $?
versus
unzip -qod $ORACLE_HOME /vagrant_software/p6880880_190000_Linux-x86-64.zip; echo $?

The OPatch zip for the wrong platform was used (p6880880_190000_LINUX.zip instead of the Linux x86-64 one).
Here is what should have been deployed.

[oracle@ol7-112-dg1 ~]$ unzip -qod $ORACLE_HOME /vagrant_software/p6880880_190000_Linux-x86-64.zip; echo $?
0
[oracle@ol7-112-dg1 ~]$
[oracle@ol7-112-dg1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517242;Database Release Update : 19.3.0.0.190416 (29517242)
OPatch succeeded.
[oracle@ol7-112-dg1 ~]$ $ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.21
OPatch succeeded.
[oracle@ol7-112-dg1 ~]$

[oracle@ol7-112-dg1 ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/19.3.0.0/db_1
[oracle@ol7-112-dg1 ~]$

### This failed:
[oracle@ol7-112-dg1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
/u01/app/oracle/product/19.3.0.0/db_1/OPatch/opatch: line 839: [: too many arguments
/u01/app/oracle/product/19.3.0.0/db_1/OPatch/opatch: line 839: [: too many arguments
Java (1.7) could not be located. OPatch cannot proceed!
OPatch returns with error code = 1
[oracle@ol7-112-dg1 ~]$

### This works but why?
[oracle@ol7-112-dg1 ~]$ $ORACLE_HOME/OPatch/opatch version -jdk $ORACLE_HOME/jdk
OPatch Version: 12.2.0.1.21
OPatch succeeded.
[oracle@ol7-112-dg1 ~]$

### Here is java version and noticed it's 64-Bit
[oracle@ol7-112-dg1 bin]$ $ORACLE_HOME/jdk/bin/java -version
java version "1.8.0_201"
Java(TM) SE Runtime Environment (build 1.8.0_201-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode)
[oracle@ol7-112-dg1 bin]$

$ORACLE_HOME/runInstaller -applyRU /home/oracle/patch/31305339 should now work.

Pretzel Logic

Greg Pavlik - Sat, 2020-10-17 18:12

 

Database runInstaller applyRU Failed Me

Michael Dinh - Sat, 2020-10-17 12:24

I still remember the cliché a manager used to tell me: “Slow and steady wins the race.”

Looks like it is true with Oracle software, as you never know what you’re going to get.

I am trying to install the 19.3 database software and apply Patch 31305339 – GI Release Update 19.8.0.0.200714, which failed miserably.

There is little to no information for the failure (at least from what I was able to ascertain).

Resolving the issue requires starting over from the beginning.

Here are the steps taken:

[oracle@ol7-112-dg1 ~]$ cat /etc/system-release
Oracle Linux Server release 7.7

[oracle@ol7-112-dg1 ~]$ ll /etc/ora*
-rw-r--r--. 1 root   root      32 Aug  8  2019 /etc/oracle-release
-rw-rw-r--. 1 oracle oinstall 790 Oct 14 19:13 /etc/oratab

[oracle@ol7-112-dg1 ~]$ echo $new_db_home
/u01/app/oracle/product/19.3.0.0/db_1

[oracle@ol7-112-dg1 ~]$ rm -rf $new_db_home

[oracle@ol7-112-dg1 ~]$ ls $new_db_home
ls: cannot access /u01/app/oracle/product/19.3.0.0/db_1: No such file or directory

[oracle@ol7-112-dg1 ~]$ echo $zip_loc
/vagrant_software

[oracle@ol7-112-dg1 ~]$ ls -l $zip_loc/LINUX.X64_193000_db_home.zip
-rwxrwxrwx. 1 vagrant vagrant 3059705302 Sep  5  2019 /vagrant_software/LINUX.X64_193000_db_home.zip

[oracle@ol7-112-dg1 ~]$ unzip -qo $zip_loc/LINUX.X64_193000_db_home.zip -d $new_db_home; echo $?; ls $new_db_home
0
addnode     crs   dbjava       dmu      hs             jdbc  md       olap     ords  plsql    rdbms          runInstaller   sqlj      ucp
apex        css   dbs          drdaas   install        jdk   mgw      OPatch   oss   precomp  relnotes       schagent.conf  sqlpatch  usm
assistants  ctx   deinstall    dv       instantclient  jlib  network  opmn     oui   QOpatch  root.sh        sdk            sqlplus   utl
bin         cv    demo         env.ora  inventory      ldap  nls      oracore  owm   R        root.sh.old    slax           srvm      wwg
clone       data  diagnostics  has      javavm         lib   odbc     ord      perl  racg     root.sh.old.1  sqldeveloper   suptools  xdk

[oracle@ol7-112-dg1 ~]$ $new_db_home/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.

[oracle@ol7-112-dg1 ~]$ unzip -qo $zip_loc/p6880880_190000_LINUX.zip -d $new_db_home; echo $?
0

[oracle@ol7-112-dg1 ~]$ $new_db_home/OPatch/opatch version
/u01/app/oracle/product/19.3.0.0/db_1/OPatch/opatch: line 839: [: too many arguments
/u01/app/oracle/product/19.3.0.0/db_1/OPatch/opatch: line 839: [: too many arguments
Java (1.7) could not be located. OPatch cannot proceed!
OPatch returns with error code = 1

[oracle@ol7-112-dg1 ~]$ $new_db_home/OPatch/opatch version -jdk $new_db_home/jdk
OPatch Version: 12.2.0.1.21

OPatch succeeded.

[oracle@ol7-112-dg1 ~]$ cd $new_db_home
[oracle@ol7-112-dg1 db_1]$ ls -l /home/oracle/patch/31305339
total 132
drwxr-x---. 5 oracle oinstall     81 Jul 10 05:20 31281355
drwxr-x---. 5 oracle oinstall     62 Jul 10 05:17 31304218
drwxr-x---. 5 oracle oinstall     62 Jul 10 05:18 31305087
drwxr-x---. 4 oracle oinstall     48 Jul 10 05:20 31335188
drwxr-x---. 2 oracle oinstall   4096 Jul 10 05:18 automation
-rw-rw-r--. 1 oracle oinstall   5054 Jul 10 05:46 bundle.xml
-rw-rw-r--. 1 oracle oinstall 120878 Jul 20 19:04 README.html
-rw-r--r--. 1 oracle oinstall      0 Jul 10 05:17 README.txt

[oracle@ol7-112-dg1 db_1]$ pwd
/u01/app/oracle/product/19.3.0.0/db_1

[oracle@ol7-112-dg1 db_1]$ ./runInstaller -debug -applyRU /home/oracle/patch/31305339
Preparing the home to patch…
Applying the patch /home/oracle/patch/31305339…
OPatch command failed while applying the patch. For details look at the logs from /u01/app/oracle/product/19.3.0.0/db_1/cfgtoollogs/opatchauto/.
[oracle@ol7-112-dg1 db_1]$ cd /u01/app/oracle/product/19.3.0.0/db_1/cfgtoollogs/opatchauto
-bash: cd: /u01/app/oracle/product/19.3.0.0/db_1/cfgtoollogs/opatchauto: No such file or directory
[oracle@ol7-112-dg1 db_1]$ cd /u01/app/oracle/product/19.3.0.0/db_1/cfgtoollogs/
[oracle@ol7-112-dg1 cfgtoollogs]$ ll
total 4
drwxrwx---. 2 oracle oinstall 4096 Oct 17 16:44 oui
[oracle@ol7-112-dg1 cfgtoollogs]$

[oracle@ol7-112-dg1 db_1]$ ./runInstaller -debug -applyRU /home/oracle/patch/31305339

ERROR: The home is not clean. This home cannot be used since there was a failed OPatch execution in this home. Use a different home to proceed.

[oracle@ol7-112-dg1 db_1]$


Executing "gridSetup.sh" Fails with "ERROR: The home is not clean"(Doc ID 2279633.1)
Fails with "ERROR: The home is not clean" .

[root]# rm -Rf 
[oracle]$ unzip linuxx64_12201_grid_home.zip -d 

Database 19c Upgrade Land Mines And Resources

Michael Dinh - Sat, 2020-10-17 08:11

Here are the contents of a tweet I started.

Hard to imagine that setting SQLPATH and glogin.sql would affect an upgrade.

Those are the only changes I made.

“ORA-04023: Object SYS.STANDARD Could Not Be Validated or Authorized” during database upgrade (Doc ID 984511.1)

The documentation did not mention that a customized glogin.sql and/or SQLPATH will cause the upgrade to fail. The documentation has now been updated.

You can read more about Upgrade and profile scripts

AutoUpgrade validates glogin for you before starting. It looks like improvements are only being added to AutoUpgrade and not to the traditional methods. Not only do you need to read the documentation, you also need to read it again because of changes.

Lastly, here is a Useful List of Examples which I find helpful.

Oracle Apex 20.2 REST API data sources and database tables

Kubilay Çilkara - Sat, 2020-10-17 02:11

Oracle Apex 20.2 is out with an interesting new feature!  The one that caught my attention was the REST Data Source Synchronisation feature. 


Why is REST Data Source feature interesting? 


Oracle Apex REST Data Source Synchronisation is exciting because it lets you query REST endpoints on the internet on a scheduled or on-demand basis and saves the results automatically in database tables. I think this approach is a better fit for slow-changing data accessible via REST APIs. If a REST endpoint’s data is known to change only once a day, why should we call the endpoint via HTTP every time we want to display data on an Apex page? Why make too many calls, cause traffic, and keep machines busy for data which is not changing? Wouldn’t it be better to store the data in a table in the database (think cache here) and display it from there, with no extra REST endpoint calls every time a page is viewed, and then synchronise it automatically by calling the REST endpoint at predetermined intervals?


This is exactly what the REST Data Source Synchronisation does. It queries the REST API endpoint and saves (caches) the JSON response as data in a database table on a schedule of your choice. 


For my experiment I used the London TfL Disruption REST Endpoint from the TfL API which holds data for TfL transportation disruptions. I configured this endpoint to synchronise my database table every day. 



I created the Oracle Apex REST Data Source inside apex.oracle.com. I used the key provided by the TfL API Dev platform to make the call to the TfL REST endpoint, and I am synching it once a day onto an Oracle Apex Faceted Search page.


I was able to do all this with zero coding, just by pointing the Oracle Apex REST Data Source I created for the TfL API to a table and scheduling the sync. To see the working app, go to this link: https://apex.oracle.com/pls/apex/databasesystems/r/tfl-dashboard/home






Categories: DBA Blogs

load data into apex 20.1 with defaults for some columns

Tom Kyte - Fri, 2020-10-16 10:06
I want to insert data from an Excel or CSV file (Id, Name, Address, Amount) into a database table by using data load, but during the insert I want to mark this data as related to HSBC Bank (because I have many banks and each bank has thousands of customers). So how can I insert the name of HSBC bank into the table during the data load? (Note that the Excel or CSV file does not contain the name of HSBC bank.)
Categories: DBA Blogs

Azure Cosmos DB: All You Need to Know

Online Apps DBA - Fri, 2020-10-16 07:25

Azure Cosmos DB is the globally distributed database service from Microsoft. Build applications with guaranteed high availability and low latency anywhere, at any scale, or migrate Cassandra, MongoDB, and other NoSQL workloads to the cloud. Want to know more about the Azure Cosmos DB? Read the blog post at https://k21academy.com/dp90012 to learn more. The blog […]

The post Azure Cosmos DB: All You Need to Know appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

VCN Flow Logs for Oracle Cloud Infrastructure (OCI)

Online Apps DBA - Fri, 2020-10-16 07:02

[New Update] Oracle VCN Flow Logs now available in the commercial realm Ever wondered what goes in and out of your network in the cloud and how? VCN Flow Logs allows you to view connection information for traffic within, and in and out of your virtual cloud network (VCN). Check out this blog at https://k21academy.com/oci74 […]

The post VCN Flow Logs for Oracle Cloud Infrastructure (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Microsoft Azure AZ-303 Automate Deployment And Configuration Of Resources

Online Apps DBA - Fri, 2020-10-16 06:47

Keen on getting an idea of different ways to deploy resources and know-how ARM Templates helps to deploy resources? Then check out K21academy’s blog post at – k21academy.com/az30312 & you will see major aspects of deployment and configuration of resources. Areas covered in this blog post: • What Are ARM And ARM Templates? • Evaluate […]

The post Microsoft Azure AZ-303 Automate Deployment And Configuration Of Resources appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Kubernetes Security For Beginner – CKS

Online Apps DBA - Fri, 2020-10-16 05:22

Kubernetes is the most used platform among admins. Given the platform’s popularity and high adoption rates, it’s important for developers to ensure that Kubernetes security is always prioritized. To know more about Kubernetes Security check out this blog https://k21academy.com/kubernetessec13 The blog will cover: ▪️ What Is Kubernetes security? ▪️ Kubernetes Security Features ▪️ RBAC ▪️ […]

The post Kubernetes Security For Beginner – CKS appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Microsoft Azure Data Fundamentals [DP-900]: Step By Step Activity Guides (Hands-On Labs)

Online Apps DBA - Fri, 2020-10-16 04:40

The DP-900 Microsoft Azure Data Fundamentals Certification is intended for those candidates who want to start working with data on the cloud, get basic skills in cloud data services, and also build foundational knowledge of cloud data services in Microsoft Azure. If you are planning to take the Microsoft Azure Data Fundamentals [DP-900] Certification then […]

The post Microsoft Azure Data Fundamentals [DP-900]: Step By Step Activity Guides (Hands-On Labs) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Upgrade Data Migration Choices – HP to Linux

Bobby Durrett's DBA Blog - Thu, 2020-10-15 18:40
1. Introduction

I just finished moving and upgrading an Oracle database from 11.2.0.4 on HP Unix Itanium to 19c on x86-64 Linux on VMWare along with an army of coworkers on our project team. I want to use this post to describe the choices I considered for migrating the data from the old to the new system. I mention my coworkers because this was a major collaborative effort and I do not want to take credit for the work of others. But I did have to think through the best way to move this big database cross platform and I think that my evaluation of those options in dialog with the project team was a way that I contributed. So, I can blog about it. Also, I think it is worth posting about this subject because the choice of data migration method was challenging and interesting to me. So, if I can find a way to communicate about it clearly enough it might have value to others and even to myself if I come back to this later.

First, I think I should explain some of the parameters for this upgrade/migration. The database files total about 15 terabytes of space. I guess actual data is 7-8 TB but everything else together adds up to 15. The database has tens of thousands of partitions and subpartitions. These partitions/subpartitions are manipulated by batch processes on the weekend and in the weekday evenings. Some of the tables are partitioned by date but others by columns that are not date related.

The source system is running 11.2.0.4 and the target system 19c. The source and target platforms have different endianness. At the beginning of the project our goal was to minimize downtime for the final cutover weekend, so I tried to find methods of cutting over from the old to new database with only an hour or two of downtime max.

The system has an online web front end that business users use during the weekdays as well as very resource intensive batch jobs on the weekend and on some weeknights. One goal of the project besides upgrading to a newer platform was to improve performance. On some weekends the batch processes did not complete in time for the users to see the updated data on Monday morning. The much faster x86-64 processors on the new system should speed up the weekend batch processes to meet the business need to complete before the Monday workday. CPU was pegged for several hours each weekend on the older HP Itanium system so we knew that newer, faster CPUs would help.

2. Transportable Tablespaces and Cross Platform Incremental Backup

Next, I want to talk about different options we considered. Another DBA on my team worked with an earlier proof of concept for this project to show the potential benefits of it. At some point she tried out the ideas in an earlier version of this Oracle support document:

V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1)

This is a fairly complicated procedure designed to speed up the move from Unix to Linux by allowing you to use Transportable Tablespaces to move your data over, converting to the new endianness, and then apply incremental backups of the source system to catch up any changes, changing endianness of the backups as you go. Transportable Tablespaces are kind of a pain, but I have worked on them before and they can be done. But the key limitation of this process is that just before you go live on the new system you must use Data Pump to import all the metadata from your source database. But since we have tens of thousands of partitions and subpartitions the time to import the metadata could be several hours. So, I felt like we could use this approach but with all the complexity and risk we still might have a lengthy period of downtime and another approach might work better. I think the key lesson from this approach is how hard it is to change endianness using a physical backup when you have a huge data dictionary. If we were moving from Linux to Linux, we probably could have used some sort of physical copy along with the application of incremental changes without the big metadata import at the end. But with the big data dictionary and the cross-platform nature of the upgrade the physical copy option (transportable tablespaces) seemed too slow.

One reason I liked this option was that it would work well with NOLOGGING changes. I was concerned early on that I could not use replication methods that pulled changes from the redo logs because some updates would not be logged. But incremental backups do pick up blocks that are updated even if the updates are not written to the redo logs. Eventually I did a bunch of research and found that I could turn on FORCE LOGGING on the source database and that opened up the option of using the redo logs. The problem of the long time to export/import the metadata with the Transportable Tablespaces option pushed me to pursue the FORCE LOGGING research to make sure we could use it.
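For reference, enabling and verifying force logging is a one-statement change. This is standard Oracle syntax, shown here only as a sketch; before doing this on a busy production system, check the impact on redo volume, since NOLOGGING operations will start generating full redo:

```sql
-- Ensure all changes are written to the redo logs,
-- overriding NOLOGGING attributes at the segment level
ALTER DATABASE FORCE LOGGING;

-- Verify the setting
SELECT force_logging FROM v$database;
```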

3. Pre-Loading Static Historical Data

The second approach I considered, and discussed with the team early on, was exporting and importing historical data in advance and only migrating the actively updated data during the cutover window. If you have most of your data in tables that are partitioned by a date column, you could potentially pre-load the older static data on your new system and then migrate only a small percentage of your 15-terabyte database during the cutover window. I found that only about 4 terabytes had not been updated in the past 90 days, and that made me stop pursuing this option. Here is the script I used: statictabsum.sql. I just looked at the last_analyzed date on tables, indexes, partitions, and subpartitions. Not perfect, but it gave me some idea of how much data was static. Slightly edited output:

SQL> select sum(bytes)/(1024*1024*1024*1024) total_tb
  2  from withlastanalyzed
  3  where
  4  last_analyzed < sysdate - &&DAYSOLD;

  TOTAL_TB
----------
4.34381223
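The statictabsum.sql script itself is linked above rather than reproduced; a simplified, hypothetical sketch of the same idea, covering plain tables only (the real script also handles indexes, partitions, and subpartitions), might look like:

```sql
-- Rough proxy for "static" data: sum the size of table segments whose
-- statistics were last gathered before the cutoff (&&DAYSOLD days ago).
WITH withlastanalyzed AS (
  SELECT s.bytes, t.last_analyzed
  FROM dba_segments s
  JOIN dba_tables t
    ON  s.owner        = t.owner
    AND s.segment_name = t.table_name
  WHERE s.segment_type = 'TABLE'
)
SELECT SUM(bytes)/(1024*1024*1024*1024) AS total_tb
FROM withlastanalyzed
WHERE last_analyzed < SYSDATE - &&DAYSOLD;
```

As noted, last_analyzed is only a proxy for write activity, which is why the estimate is imperfect.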

The system that I was upgrading is a pretty complicated system with many batch jobs and many interfaces with external systems. I would have to work with the development team to understand what data was truly static and there would be the chance that we would make a mistake and not copy something that we should. I would hate to go live and find that I missed some data that should have been updated on the partitions that I thought were static. Yikes! With my imperfect estimate of how much data could be pre-loaded and knowing the complexity of the system I felt that it was not worth trying the pre-load of the static data.

4. Oracle GoldenGate Replication

The third thing I tried, and thought would work really well for us, was Oracle GoldenGate replication. Several years ago, a coworker of mine had upgraded and migrated the database for a transactional system using GoldenGate with minimal downtime. Oracle has a white paper about how to do this:

Zero Downtime Database Upgrade Using Oracle GoldenGate

The idea is to start capturing changes on your source system, then do a full export/import to your target system, and then use GoldenGate to apply to the target database all the changes that happened on the source during the export/import. We had this set up and began testing it, but we hit failures caused by the application batch jobs changing table partitions and subpartitions. We had configured GoldenGate to replicate DML but not DDL. We researched it in the manuals and found that, to replicate DDL (partition drops and adds) in the most current and desirable way, the source system had to be 11.2.0.4 or later with the compatible parameter also set to 11.2.0.4 or later. See this section of the 12.2.0.1 installation manual:

13.1.1 Support for DDL capture in integrated capture mode

Unfortunately, even though our source system was on 11.2.0.4 the compatible parameter was set to 11.2.0, the default. I wonder why 11.2.0.4 defaults to compatible=11.2.0? Sure, we could change the parameter, but you cannot easily back out a change to compatible. And we would essentially have to retest our entire application before changing it. Given that our project was running out of time and money that was not an option. So, that led to our final choice.
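Checking the compatible setting ahead of time is cheap; a quick sketch:

```sql
-- What the instance currently allows; on our source this came back 11.2.0
SELECT name, value FROM v$parameter WHERE name = 'compatible';

-- Raising it is a one-liner, but effectively irreversible without a restore:
-- ALTER SYSTEM SET compatible = '11.2.0.4' SCOPE = SPFILE;  -- then restart
```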

5. Data Pump Export and Import

The last migration method we considered, and the one we ultimately used in the production migration, was Oracle Data Pump export and import, using the expdp and impdp utilities. This is the simplest and most widely used method for copying data from one Oracle database to another, and it works great across platforms with different endianness and different versions of Oracle. We used this method to populate our new databases for testing. Originally we thought about using the same data migration method for testing that we planned to use for the final production cutover, but we did not have the time and money to get GoldenGate set up before all the testing began, so we could not use it to populate our test databases and had to use the simpler Data Pump utility. We had a whole crew of people lined up to work on the project, so we had to simply do a full export/import and cut them loose. As the project went on we streamlined our export/import method, including working with Oracle Support to get a fix for a bug that affected our data migrations.

As our project dragged on for a number of months beyond our initial deadline, and as we discovered that GoldenGate was not going to work for us without a lot more work, I started to think about using Data Pump export/import, our last resort. At the time it was taking 50 hours to do the full export/import, and I asked if we could get approval for two extra days of downtime – two full working days of application downtime. To me, 50 hours to migrate a 15-terabyte database was not bad. I had expected it to take several days based on prior experiences populating dev and test databases from this same source system. The HP Unix hardware was older and had slower network adapters, so that was part of the reason for the long data migration time.
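For reference, a full Data Pump export boils down to a small parameter file; a hypothetical sketch (the directory object, file names, and parallelism are examples, not our actual settings):

```
# full_exp.par -- run as: expdp system PARFILE=full_exp.par
FULL=Y
DIRECTORY=dp_dir
DUMPFILE=full_%U.dmp
LOGFILE=full_exp.log
PARALLEL=8
```

A matching impdp parameter file on the target can point its DIRECTORY at a network-mounted filesystem so the dump files never have to be copied.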

Once we got the approval to take the extra downtime, I had to fend off suggestions to go back to preloading the historical data or to ignore tables that we thought were obsolete. By this time, I was convinced that a full export/import made the most sense, because all our testing was done on systems that were populated with the same method. We had the business approval for the downtime, and we knew that we had been testing for months on databases that had been populated with this method. Finally, our Unix/Linux/Storage team came up with a clever way to cut our export/import time almost in half without changing anything on the database side. Essentially, they figured out how to move the filesystem that we were exporting to onto an HP Unix blade with a faster network adapter. The original system had a 1-gigabit network adapter and the new system had 10 gigabit. Also, they set up an NFS-mounted filesystem so we could import straight over the network rather than copy all the files to the target server and consume all that duplicate storage space. We tested this approach several times, and then the real thing went in without issues, even a little faster than we expected.

6. Conclusion

My main concern for this project was how to migrate such a large database and change the endianness of the data in a short enough time to meet the needs of our users and our company. This concern drove me to investigate several potentially complicated approaches to this data migration. But in the end, we used the simplest and surest method that we had already used to populate our test databases. We took several steps to tune our full Data Pump export/import process. The seven largest tables were broken out into their own parfile and exported to disk uncompressed in parallel. Their indexes were rebuilt parallel nologging. Then the Unix/Linux/Storage team did their magic with the faster network adapter. But even with these helpful performance enhancements our approach was still simple – Data Pump export and import. The more exotic methods that we tried were thwarted by the nature of the database we were upgrading. It had too many subpartitions. The application manipulated the subpartitions during the week. We had the wrong compatible value. Finally, budget and schedule concerns forced the decision to go with what worked, export/import. And in the end, it worked well.
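The index rebuild step mentioned above is plain DDL; a sketch with a made-up index name:

```sql
-- Build the index with parallel slaves and minimal redo generation...
ALTER INDEX app.big_table_ix REBUILD PARALLEL 8 NOLOGGING;

-- ...then reset the attributes so normal operation is unaffected:
ALTER INDEX app.big_table_ix NOPARALLEL LOGGING;
```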

I want to conclude this post with the high-level lessons I learned from this process. There is value in simplicity. Do not use a more complex solution when a simpler one meets your needs. The Transportable Tablespaces and GoldenGate options were both cool to work on and think about. But it takes time to dig into things, and complexity adds risk of failure. If a simpler solution is safer, meets the business need, and takes less time and money, why not use it?

I guess the other high-level lesson is that it is easier to get approval for downtime when your project is behind schedule and over budget. When push came to shove, downtime was less important than going live. Maybe if I had pushed for the longer downtime from the beginning and planned all along to use export/import, the project would have been shorter. But I did not know that GoldenGate would run into the issues it did, so it was harder to insist up front that there was no way to eliminate substantial downtime. I also did not know at the beginning of the project that the export/import method could be streamlined to run in one day instead of several.

Maybe the real lesson is that you have to work through these data migration options with your project team and make the best choices you can at the time, based on the technical issues you discover and the business needs as they evolve with the project. This project ended with a simple data migration approach that met the company's needs, but future projects may have different technical and business parameters, and the more complex approaches may be more appropriate in those situations.

Bobby

Categories: DBA Blogs

In database row archival

Tom Kyte - Thu, 2020-10-15 15:46
I have a question about Oracle in-database archiving in Oracle 19c. The table below has an XMLType column. Running "alter table HTS_SUMMARY_DATA_CELL row archival" fails with: SQL Error: ORA-38353: This table type is not supported by the ROW ARCHIVAL clause. The same statement in Oracle 12c (12.1.0.2), "alter table HTS_SUMMARY_DATA_CELL row archival", succeeds and the table gets archived. What is the difference between the 12c and 19c versions?
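For context, this is what in-database row archival looks like on a plain relational table (the table name and values here are illustrative); the question above is why the ALTER fails in 19c when the table contains an XMLType column:

```sql
CREATE TABLE t_demo (id NUMBER);
ALTER TABLE t_demo ROW ARCHIVAL;

-- Archived rows are hidden from normal queries via the ORA_ARCHIVE_STATE column:
UPDATE t_demo SET ora_archive_state = '1' WHERE id = 1;

-- Make archived rows visible again for this session:
ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL;
```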
Categories: DBA Blogs

In-database row archival and compression

Tom Kyte - Thu, 2020-10-15 15:46
In Oracle 12c (12.1), when a table is compressed we cannot do row archival on it. In Oracle 19c, if a table is compressed we can do row archival. Why is that? What is the method to do in-database archiving for compressed tables in 12c? And what would the rollback scripts be, if we no longer need archiving for a compressed table?
Categories: DBA Blogs

Oracle .DBF file format structure

Tom Kyte - Thu, 2020-10-15 15:46
Hi, I want to iterate over all rows in a table and do some operations on them, but I don't want to fetch the data with a SELECT statement. To do that I tried to export the data as .dbf and read it with Java, but it seems the Oracle .dbf format differs from the xBase .dbf format (https://www.clicketyclick.dk/databases/xbase/format/). Where can I get details of the Oracle .dbf data file structure? Or can you suggest more suitable techniques for this purpose? Thank you.
Categories: DBA Blogs

Homepage URL without "/apex/workspace/r/app"

Tom Kyte - Thu, 2020-10-15 15:46
Hi Tom and team, let's suppose my public website, built on Oracle Apex of course and running on a shared host, is at www.example.com Unfortunately, any new customers visiting "www.example.com" will be immediately redirected to a URL of something like "www.example.com/apex/workspace/r/exampleapp/app_homepage" (assuming we use friendly URLs). Is there any way to display the URL as just "www.example.com", at least for the home page? Even something like "www.example.com/app_homepage" would be great. All guidance much appreciated! Many thanks, Greg https://www.comparefootballodds.co.uk/ PS I've seen some discussion of masking, but that has various downsides (e.g. poor for SEO, and unable to bookmark pages).
Categories: DBA Blogs

Does the Oracle 10g database UTL_HTTP support TLS 1.2?

Tom Kyte - Thu, 2020-10-15 15:46
Hi everyone, thanks in advance. I am trying to connect to an HTTPS API from a PL/SQL procedure in an Oracle 10g database, and I would like to connect to a server which accepts TLS 1.2. 1) Does Oracle 10g support TLS 1.2 connectivity? 2) Is there a patch which enables TLS 1.2 in an Oracle 10g database? 3) Connecting through the JVM in the Oracle 10g database is not an option, as our Java version is Java 4. If there is any other workaround, kindly post it. Note: I have gone through the following websites: 1) Oracle product support forum (which requires a login to see the resource) <code> https://www.oracle.com/webfolder/community/oracle_database/3926485.html</code> 2) Burleson Consulting website, which states that Oracle 10g supports the TLS SSL protocol <code> http://www.dba-oracle.com/10g_transport_layer_security.htm</code> Thanks Naveen Mani
Categories: DBA Blogs

Upgrade to Oracle 19c – performance issue

Yann Neuhaus - Thu, 2020-10-15 08:37

In this blog I want to introduce you to a workaround for a performance issue which appeared randomly during the upgrades of several Oracle 12c databases to 19c that I performed for a financial services provider. We ran into a severe performance issue after the upgrades of more than 40 databases had worked just fine: while most of them finished in less than one hour, we hit one which would have taken days to complete.

Issue

After starting the database upgrade from Oracle 12.2.0.1.0 to Production Version 19.8.0.0.0 the upgrade locked up during compiling:

@utlrp

 

Reason

One select-statement on the unified_audit_trail was running for hours with no result, blocking the upgrade progress and consuming nearly all database resources. The size of the audit_trail itself was about 35MB, so not the size you would expect such a bottleneck from:

SQL> SELECT count(*) from gv$unified_audit_trail;
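One way to check the storage footprint of the unified audit trail up front (assuming the 12.2-style storage under the AUDSYS schema):

```sql
-- Total space used by unified-audit segments, in MB
SELECT ROUND(SUM(bytes)/1024/1024) AS mb
FROM dba_segments
WHERE owner = 'AUDSYS';
```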

 

Solution

After some research and testing (see notes below) I found the following workaround (after killing the upgrade process):

SQL> begin
  DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
    use_last_arch_timestamp => TRUE);
end;
/

SQL> set timing on;
SQL> SELECT count(*) from gv$unified_audit_trail;
SQL> exec DBMS_AUDIT_MGMT.FLUSH_UNIFIED_AUDIT_TRAIL;

 

Note

As a first attempt I used the procedure below, described in Note 2212196.1.

But flush_unified_audit_trail lasted too long, so I killed the process after it had run for one hour. The flush procedure worked fine again after using clean_audit_trail as described above:

SQL> begin
  DBMS_AUDIT_MGMT.FLUSH_UNIFIED_AUDIT_TRAIL;
  for i in 1..10 loop
    DBMS_AUDIT_MGMT.TRANSFER_UNIFIED_AUDIT_RECORDS;
  end loop;
end;
/

 

 

A few days later we encountered the same issue on an Oracle 12.1.0.2 database, which requires Patch 25985768 to execute dbms_audit_mgmt.transfer_unified_audit_records.

This procedure is available out of the box in Oracle 12.2 databases, and in Oracle 12.1.0.2 databases which have been patched with Patch 25985768.

To avoid getting caught in this trap, my advice is to gather all relevant statistics before any upgrade from Oracle 12c to 19c and to query gv$unified_audit_trail in advance. This query usually finishes within a few seconds.

 

Related documents

Doc ID 2212196.1

https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=257639407234852&id=2212196.1&_afrWindowMode=0&_adf.ctrl-state=rd4zvw12p_4

Master Note For Database Unified Auditing (Doc ID 2351084.1)

Bug 18920838 : 12C POOR QUERY PERFORMANCE ON DICTIONARY TABLE SYS.X$UNIFIED_AUDIT_TRAIL

Bug 21119008 : POOR QUERY PERFORMANCE ON UNIFIED_AUDIT_TRAIL

Performance Issues While Monitoring the Unified Audit Trail of an Oracle12c Database (Doc ID 2063340.1)

The post Upgrade to Oracle 19c – performance issue appeared first on Blog dbi services.
