Feed aggregator

[A NEW VERSION OF DP-200 & DP-201] Exam DP-203: Data Engineering on Microsoft Azure (beta)

Online Apps DBA - 4 hours 40 min ago

Exams DP-200 & DP-201 will be replaced by Exam DP-203 on February 23, 2021. You may still be able to earn this certification by passing DP-200 and DP-201 until they retire on June 30, 2021. Azure Data Engineers are responsible for integrating, transforming, and consolidating data from distinct structured and unstructured data […]

The post [A NEW VERSION OF DP-200 & DP-201] Exam DP-203: Data Engineering on Microsoft Azure (beta) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Connect to Oracle Cloud DB from Python

Andrejus Baranovski - Sat, 2021-01-16 09:19
A quick explanation of how to connect to Oracle Autonomous Cloud Database (Always Free instance) from Python script.


Fantasy Software Development – Story Finding

The Anti-Kyte - Sat, 2021-01-16 08:56

As threatened promised, I’m going to look at the first phase of my Application – recording the results and outcomes of Soccer tournaments from the first days of the sport in 1871 through to the culmination of the first Football League Championship in 1889.

I’ll begin with a narrative description of the functional requirements of our application. OK, it’s more like a potted history of the early days of Association Football, but I’ve got to start somewhere.

I’ll use this to extrapolate some user stories, which I’ll then drop into Atlassian Jira, having taken advantage of a free Jira account.

If you’re an Oracle developer and reading this is your first experience of Scrum, then you may feel that it’s not an obvious fit for developing a data-centric application.
On the other hand, if you’re a Scrum aficionado, you may be faintly horrified by the “free-form” way I’m approaching Scrum. So, something to annoy everyone then…

Association Football – The Early Years

Those of you less than enamoured of “the Beautiful Game” will not be unduly surprised to learn that it all started with a bunch of blokes in a pub…

On 26th October 1863, at the Freemason’s Tavern in London, the Football Association (FA) first came into being.
Despite this, the rules observed by its member clubs when playing the game remained somewhat fragmented.
There were several versions to choose from and clubs would agree a set of rules to adhere to when playing each other.

In an attempt to promote the “Association” rules (adopted by the FA), Charles Alcock, Secretary of the FA, came up with the idea of a Cup competition between all of the clubs affiliated to the Association.
All matches in this new competition would be played under this single set of rules.
Thus it was that Association Football (soccer) really came into being several years after its governing body was founded.
To give a flavour of how fundamental this change was, the Rules established such matters as the number of players per side (11) and the duration of a match (90 minutes), which had not been consistent across the various football codes then extant.

The FA Cup

The first Tournament duly took place in the 1871/72 season.
The format can best be described as “sort of knock-out”.
Whilst the winners of a tie between two competing teams would advance to the next stage of the competition, this was not the only method of progression. There was, of course, the possibility of being awarded a bye, a free-pass to the next round if there were an odd number of teams in the current round.
Teams could also be awarded a Walkover, if their designated opponents withdrew.
Additionally, at the discretion of the competition’s organising committee, teams could advance if:
– they drew their fixture (both teams could go through)
– they could not agree a date and venue with the opponents against whom they had been drawn.
Eventually, the 14 entrants were whittled down to Wanderers, who defeated Royal Engineers 1-0 in the Final, played on 16th March 1872 at the Kennington Oval in London.

Originally, the intention was for the cup holders to defend the trophy from challengers, hence the competition’s full name – The Football Association Challenge Cup.
For the 1872/73 tournament, Wanderers were given a bye all the way through to the final, with the remaining matches being essentially an elimination event to find a challenger.
Wanderers were also given choice of venue and – perhaps unsurprisingly – managed to retain the trophy with a 2-0 win over Oxford University.

It was only from 1873/74 that the competition settled down into a consistent knock-out format.

For the first 10 years of competition, the southern amateur teams dominated, and it was not until 1882 that a team from the north of England appeared in the final.
That year, Old Etonians saw off the challenge of Blackburn Rovers. It was to prove the end of an era.
In subsequent years the centre of power changed radically: Rovers’ local rivals Blackburn Olympic won the trophy the following season, after which Rovers themselves won three consecutive finals.

The Football League

By 1888, football in England was dominated by professional clubs in the North and Midlands. 12 such clubs formed the Football League and participated in the first season.
The League format consisted of each team playing all of the others, once on their home ground and once on the opponents’ ground.
For each match won, 2 points were awarded.
For a draw, one point was awarded.
No points were awarded for a defeat.
Teams finishing level on points would be separated by Goal Average.

Goal Average was calculated by dividing the number of goals scored over the course of the season by the number of goals conceded.
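
To make the scoring rules concrete, here is a minimal SQL sketch of how such a league table might be computed from a hypothetical RESULTS table (home_team, away_team, home_goals, away_goals); the table and column names are illustrative only, not part of the eventual design:

select team,
       sum(points) as points,
       round(sum(goals_for) / nullif(sum(goals_against), 0), 3) as goal_average
from (
    select home_team as team,
           case when home_goals > away_goals then 2      -- win
                when home_goals = away_goals then 1      -- draw
                else 0                                   -- defeat
           end as points,
           home_goals as goals_for,
           away_goals as goals_against
    from   results
    union all
    select away_team,
           case when away_goals > home_goals then 2
                when away_goals = home_goals then 1
                else 0
           end,
           away_goals,
           home_goals
    from   results
)
group by team
order by points desc, goal_average desc;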

International Football

The first international took place on 30th November 1872 in Glasgow, when Scotland and England drew 0-0.
As the game spread to Wales and Ireland (which was then a single entity and part of the United Kingdom), matches between the four home nations became a regular occurrence. However, each association observed slightly different rules and this was the cause of some friction.
Eventually, the International Football Conference was convened in Manchester in December 1882. It was at this meeting that a common set of rules was agreed.
The first full season in which these rules were applied was 1883/84 and it’s subsequently been acknowledged that the matches played between the Home nations in that season comprised the inaugural British Home Championship – the first international soccer tournament.
The format was a round-robin, with each nation playing the others once and home and away fixtures alternating between years.
The outcome of the tournament was decided using the same criteria as for the Football League, with the exception that Goal Average was not applied and teams finishing level on points were considered to be tied.
Given the influence of Scottish players on the development of the game, it’s little surprise that Scotland won the inaugural championship. Indeed, Scotland only failed to win one of the first six tournaments up to and including 1888/89.

I should point out that applying this format to the Home International Championship is slightly anachronistic.
Contemporary reports don’t include league tables and these seem to have been applied several years later, under the influence of the format adopted by the Football League.
For our purposes however, we’re going to proceed on the basis that this format was in place from the outset.

User Stories

To start with, I’ve identified three distinct user roles :

  • Football Administrator – generally runs the FA and sets up Competitions
  • Tournament Organiser – manages tournaments
  • Record Keeper – enters fixture data, including results and maintains league tables

The stories are written from the perspective of these users.

Football Administrator Stories
Title : Create Competition
 As a : Football Administrator 
 I would like to : create a competition
 so that : I can provide a structured environment for teams to participate
Title : Administer Teams
 As a : Football Administrator
 I would like to : Maintain details of teams 
 So that : I know which teams are eligible to enter competitions
Title : Administer Venues
 As a : Football Administrator
 I would like to : Maintain details of venues where matches can be played
 So that : I know where teams can fulfil fixtures.
Tournament Organiser Stories
Title : Create Tournament
 As a : Tournament Organiser
 I would like to : create a tournament
 so that : teams can compete against each other
Title : Specify a format for a tournament
 As a : Tournament Organiser
 I would like to : specify the format for a tournament
 so that : I can anticipate what fixtures may be played
Title : Define Tournament Rules
 As a : Tournament Organiser
 I would like to : Define the rules for the tournament
 So that : I know how to determine the outcome of the tournament
Title : Enter Teams
 As a : Tournament Organiser
 I would like to : Accept the entry of teams into a tournament
 So that : I know which teams are competing in a tournament
Title : Assign Tournament Venues
 As a : Tournament Organiser
 I would like to : Assign venues to a tournament
 So that : I know where tournament fixtures may be played
Title : Assign Players to Tournament Teams
 As a : Tournament Organiser
 I would like to : assign players to a team 
 So that : I know which players are playing for which team in a tournament
Title : Override Tournament Fixture Results
 As a : Tournament Organiser
 I would like to : override results of fixtures
 So that : I can account for exceptional circumstances
Title : Teams remaining in a tournament
 As a : Tournament Organiser
 I would like to : identify the teams still in a knock-out tournament
 so that : I know which teams will be in the draw for the next round
Title : Future fixtures
 As a : Tournament Organiser
 I would like to : add tournament fixtures even when the teams are not known
 So that : I can anticipate which future fixtures are required for the tournament to be completed.
Record Keeper Stories
As a : Record Keeper
 I would like to : record details of a fixture 
 so that : I can see what happened during a match
As a : Record Keeper
 I would like to : view a league table based on results so far
 So that : I know how teams are performing relative to each other in a League or round-robin tournament
As a : Record Keeper
 I would like to : record which team won and which teams were placed in a competition
 so that : I can maintain a Roll of Honour

Whilst this is all a bit rough-and-ready, it does give us a backlog to start working from.
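
To give a flavour of where these stories might lead, here is a minimal sketch of the sort of reference tables the Football Administrator stories imply. All names are hypothetical; the actual design work comes later in the series:

create table teams (
    id        number generated always as identity primary key,
    team_name varchar2(100) not null unique
);

create table venues (
    id         number generated always as identity primary key,
    venue_name varchar2(100) not null unique
);

create table competitions (
    id        number generated always as identity primary key,
    comp_name varchar2(100) not null unique
);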

Project Management Software

Now, how best to track our progress? At this point, I did start to wonder whether I could throw together a quick application, as installing Jira was likely to take time and incur cost that I didn’t really have on this project.

Fortunately, Atlassian saved me the bother as they provide a free Jira cloud account.

Consequently, my backlog is now just waiting for Sprint 1 to start, which will be the subject of my next post on this topic.

Introduction To Google Cloud Platform

Online Apps DBA - Fri, 2021-01-15 07:20

Google Cloud is a suite of cloud computing services offered by Google that provides compute, storage, networking, and many other services, all running on the same infrastructure that Google uses internally for end-user products like Gmail, Google Photos, and YouTube. There are many services and tools offered by Google Cloud like Storage, Big Data, […]

The post Introduction To Google Cloud Platform appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

AWS Database Migration Service

Online Apps DBA - Fri, 2021-01-15 07:05

Are you looking for a way to migrate your on-premises database to the cloud? AWS Database Migration Service (DMS) is a managed service that provides a quick and secure way to migrate your on-premises databases to the cloud. Check out this blog at k21academy.com/awssa34 to know more about AWS Database Migration Service: • What is […]

The post AWS Database Migration Service appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Partner Webcast – Integration Insight in Oracle Integration

Today’s competitive market demands that stakeholders understand, monitor, and react to rapidly changing conditions. Businesses need flexible, dynamic, and detailed insight – and they need...

We share our skills to maximize your revenue!
Categories: DBA Blogs

DIFFERENCE BETWEEN ANALYZE AND DBMS_STATS

Tom Kyte - Thu, 2021-01-14 12:46
DIFFERENCE BETWEEN ANALYZE AND DBMS_STATS
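
For context, the two approaches look like this; this is a minimal sketch against a hypothetical EMP table, and note that ANALYZE is deprecated for gathering optimizer statistics in favour of DBMS_STATS:

-- Old approach (deprecated for gathering optimizer statistics):
analyze table emp compute statistics;

-- Current approach:
begin
    dbms_stats.gather_table_stats(
        ownname => user,
        tabname => 'EMP',
        cascade => true);   -- also gathers statistics on the table's indexes
end;
/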
Categories: DBA Blogs

Is there a view that a DBA can query to find out if "ORA-02393: exceeded call limit on CPU usage"

Tom Kyte - Thu, 2021-01-14 12:46
Greetings, I've seen that when the "cpu_per_call" limit is reached, ORA-02393 is sent to SQL*Plus. Is there a view that a DBA can query to find out if "ORA-02393: exceeded call limit on CPU usage" occurs to applications using the database, since it isn't written to the alert log? Thanks, John
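
One possible approach, sketched here on the assumption that ORA-02393 is raised to the session like other server errors, is an AFTER SERVERERROR trigger that logs the occurrence to your own table (the table and trigger names are hypothetical):

create table resource_limit_errors (
    logged_at  timestamp,
    db_user    varchar2(128),
    error_code number
);

create or replace trigger trg_log_cpu_limit
after servererror on database
declare
    pragma autonomous_transaction;  -- log independently of the failing transaction
begin
    -- ora_is_servererror is a system-defined event attribute function
    if ora_is_servererror(2393) then
        insert into resource_limit_errors
        values (systimestamp, ora_login_user, 2393);
        commit;
    end if;
end;
/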
Categories: DBA Blogs

MATERIALIZED VIEW Performance Issue!

Tom Kyte - Thu, 2021-01-14 12:46
I have created an MV on a UAT server. The MV uses a query with remote connectivity to PROD (select-only rights on the source tables), and those tables have millions of rows, around 10 lakh in each, but after calculation the output of the query is only 139-150 rows. The query alone, without the MV, takes 60 seconds, but when I use

CREATE MATERIALIZED VIEW
NOCOMPRESS NOLOGGING BUILD IMMEDIATE
USING INDEX REFRESH FORCE ON DEMAND NEXT null
USING DEFAULT LOCAL ROLLBACK SEGMENT
USING ENFORCED CONSTRAINTS DISABLE QUERY REWRITE
as "query"

the MV creation takes one hour and each subsequent refresh takes 20-30 minutes, which is not acceptable, as this data feeds a dashboard on a 3-minute delay that the MV refresh needs to fit within. I don't have privileges to check anything on the PROD DB, but on UAT I have sufficient access. I have tried many options but nothing worked, so please help me understand the solution or, if there is no solution, the reason behind this. In addition, when my MV refreshes, the explain plan shows: INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO abc.

1. I tried CREATE TABLE with the same query and it took less than a minute.
2. An INSERT statement also works fine, taking the same time.
3. I tried the MV refresh option with atomic_refresh=false as well, but it didn't help.

Please let me know if any further info is required.

Note: my MV query uses PROD tables (approx 4 tables) over a DB link from UAT. The PROD server has a separate user which has been given select rights on the tables below:

select count(*) from abc@prod;  --800000
select count(*) from abc1@prod; --700000
select count(*) from abc2@prod; --200000
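
For reference, a non-atomic complete refresh, which the poster mentions trying, is requested like this (a sketch; MY_MV is a hypothetical materialized view name):

begin
    dbms_mview.refresh(
        list           => 'MY_MV',  -- hypothetical MV name
        method         => 'C',      -- complete refresh
        atomic_refresh => false);   -- truncate + direct-path insert instead of delete + insert
end;
/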
Categories: DBA Blogs

blob to clob on ORDS Handler Definition

Tom Kyte - Thu, 2021-01-14 12:46
Hi! I'm trying to send a POST request with JSON:

{
  "id": 12344444,
  "email": "ppppoddddddppp@gmail.com",
  "first_name": "",
  "last_name": "",
  "billing": {
    "first_name": "22222",
    "last_name": "",
    "company": "",
    "address_1": "",
    "address_2": "",
    "city": "",
    "postcode": "",
    "country": "",
    "state": "",
    "email": "",
    "phone": ""
  }
}

I'm trying to use apex_json to extract information like "company" that is inside "billing". I read the following guide: https://oracle-base.com/articles/misc/apex_json-package-generate-and-parse-json-documents-in-oracle#parsing-json and it works, but not inside an ORDS Handler Definition. I'm using the following code, but it doesn't insert the data and returns "201":

DECLARE
  l_json_payload clob;
  l_blob_body    blob := :body;
  l_dest_offset  integer := 1;
  l_src_offset   integer := 1;
  l_lang_context integer := dbms_lob.default_lang_ctx;
  l_warning      pls_integer := dbms_lob.warn_inconvertible_char;
BEGIN
  if dbms_lob.getlength(l_blob_body) = 0 then
    :status_code := 400; --error
    :errmsg := 'Json is empty';
    return;
  end if;

  dbms_lob.createtemporary(lob_loc => l_json_payload, cache => false);
  dbms_lob.converttoclob(
    dest_lob     => l_json_payload,
    src_blob     => l_blob_body,
    amount       => dbms_lob.lobmaxsize,
    dest_offset  => l_dest_offset,
    src_offset   => l_src_offset,
    blob_csid    => dbms_lob.default_csid,
    lang_context => l_lang_context,
    warning      => l_warning);

  apex_json.parse(l_json_payload);

  INSERT INTO ACCOUNTS (
    wp_id, name, email, f_name, l_name, wp_role, wp_username,
    woo_is_paying_customer, woo_billing_first_name
  ) VALUES (
    :id,
    :first_name || ' ' || :last_name,
    :email,
    :first_name,
    :last_name,
    :role,
    :username,
    decode(:is_paying_customer, 'false', 'N', 'Y'),
    apex_json.get_varchar2(p_path => 'billing.first_name')
  );

  :status_code := 201; --created
EXCEPTION
  WHEN OTHERS THEN
    :status_code := 400; --error
    :errmsg := SQLERRM;
END;

Update: after testing, the problem is in this line:

l_blob_body blob := :body;

When I include it, nothing is inserted into the database.

Update 2: after more testing, I realized that it is not possible to combine :body with other bind values, so apex_json.get_varchar2 (e.g. p_path => 'billing.first_name') should be used instead. So the problem was solved.
Categories: DBA Blogs

How to pass a parameter to a GET Handler in APEX?

Tom Kyte - Thu, 2021-01-14 12:46
Hello, I created a PL/SQL function that returns a list of open balances as a table result, where all amounts are converted to the currency provided as an input parameter:

function my_pkg.my_func (pi_currency in NUMBER default NULL) return amount_tab pipelined;

I created an Oracle REST Data Service with only a GET handler:

select * from table(my_pkg.my_func(:to_currency));

I tested it with Advanced REST Client and it works as expected with an additional header for the to_currency parameter. In APEX I declared a REST Data Source related to the above REST service, then I made an APEX page with an IG region based on that REST source, and it works well as long as I don't provide a parameter, i.e. while to_currency is null. When I try to populate {"to_currency":"USD"} in the External Filter attribute, the application crashes. I googled the problem but found nothing. Is there any other standard way to pass a non-column parameter to the GET handler in APEX, or should I write my own procedure to call the REST service, e.g. by using APEX_EXEC? Thank you and best regards, Alex
Categories: DBA Blogs

Requirements to set up an Oracle Directory for WRITE access

Tom Kyte - Thu, 2021-01-14 12:46
We have several existing Oracle Directories set up to allow reading CSV files that work fine, and a couple of them work OK to write new files. I have been trying to add a new Directory definition pointing to a different path and cannot get it to work. I am in a corporate environment where I don't have access to the System accounts and cannot see the instance startup file, and don't have direct access to the Linux operating system, so I don't know what setup has been done for the previous Directories. One of the existing Directories that works for both read and write is defined as:

CREATE OR REPLACE DIRECTORY RED AS '/red/dev';

For the above directory, the following test code works fine to create an output file:

DECLARE
  v_file UTL_FILE.FILE_TYPE;
BEGIN
  v_file := UTL_FILE.FOPEN(location => 'RED', filename => 'test.csv', open_mode => 'w', max_linesize => 32767);
  UTL_FILE.PUT_LINE(v_file, 'A,123');
  UTL_FILE.FCLOSE(v_file);
END;

I want to write some files to a subdirectory under the above path, and have found that Oracle will only allow WRITE to a named Oracle Directory for security reasons. A new Directory I want to create is defined as:

CREATE OR REPLACE DIRECTORY RED_OUTPUT AS '/red/dev/OUTPUT';

But changing the code above to use RED_OUTPUT as the "location" or directory results in "ORA-29283: invalid file operation: cannot open file". The '/red/dev/OUTPUT' directory location exists on the external NAS filesystem and appears to have the same permissions as the parent '/red/dev' directory (as best I can tell by using Windows Explorer to look at the directory security properties). I have read various posts online indicating things like the Oracle instance must be restarted after defining a new Oracle Directory, or that every path specified by an Oracle Directory must have a separate "mount point" on the Oracle Linux server, but I don't have easy access to do those things. The RED_OUTPUT directory can currently be used to READ an existing file if I copy one to that location using Windows Explorer. What is likely the issue with not being able to WRITE to this new RED_OUTPUT directory, and are any of these additional steps (restart, mounting, etc) necessary to make this work?
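
One thing worth ruling out (a sketch, with SCOTT as a hypothetical grantee) is the directory object's own privileges, which are separate from the OS permissions that the database's operating system user needs on the path:

-- Both layers must allow the write: the Oracle directory grant...
grant read, write on directory RED_OUTPUT to scott;

-- ...and, at the OS level, write permission on /red/dev/OUTPUT
-- for the operating system user that runs the database processes.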
Categories: DBA Blogs

Get date filter from a table in Oracle?

Tom Kyte - Thu, 2021-01-14 12:46
I would like to know how to read a date column from a table and use it as a date filter for another large-volume table. I have the following query that currently uses a literal date in the filter criteria and completes in about 10 to 15 minutes:

select a, b, datec, sum(c)
from table1
where datec = date '2021-01-12'
group by a, b, datec

I'm trying to replace the hard-coded date with a date from another table called table2. It's a small table with 1600 rows that just returns the latest cycle completion date (one value), which is typically today's date minus one day, except for holidays when the cycle doesn't run. table1 is a view and it returns millions of rows. I tried the following queries in order to get the date value into the filter condition:

select a, b, datec, sum(c)
from table1 t1, table2 t2
where t1.datec = t2.pdate and t2.prcnm = 'TC'
group by a, b, datec

select a, b, datec, sum(c)
from table1 t1
inner join table2 t2 on datec = t2.pdate and t2.prcnm = 'TC'
group by a, b, datec

select a, b, datec, sum(c)
from table1 t1
where t1.datec = (SELECT t2.pdate FROM table2 t2 WHERE prcnm = 'TC')
group by a, b, datec

I also tried this hint:

select a, b, datec, sum(c)
from table1 t1
where t1.datec = (SELECT /*+ PRECOMPUTE_SUBQUERY */ t2.pdate FROM table2 t2 WHERE prcnm = 'TC')
group by a, b, datec

The above queries take too long and eventually fail with the error message "parallel query server died unexpectedly". I am not even able to get 10 rows returned when I use the date from table2. I confirmed that table2 returns only one date and not multiple dates. Can you please help me understand why the query works when a hard-coded date is used, but not when a date from another table is used? Thank you.
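
One common workaround, sketched here and untested against this system, is to fetch the single date into a variable first and then pass it to the large query as a bind, so the optimizer treats it much like the hard-coded case:

declare
    l_date date;
begin
    -- fetch the one-row driving date first
    select t2.pdate
    into   l_date
    from   table2 t2
    where  t2.prcnm = 'TC';

    -- then run the big aggregation with l_date as a bind variable
    for r in (
        select a, b, datec, sum(c) as sum_c
        from   table1
        where  datec = l_date
        group by a, b, datec
    ) loop
        null;  -- process each row as required
    end loop;
end;
/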
Categories: DBA Blogs

Understanding reused values for sql_id, address, hash_value, plan_hash_value in v$sqlarea

Tom Kyte - Thu, 2021-01-14 12:46
good evening,

I have a sql statement with the following information in v$sqlarea:

select sql_id, address, hash_value, plan_hash_value
from v$sqlarea
where sql_text = <string to identify my query>;

sql_id       |address         |hash_value|plan_hash_value
cv65zdurrtfus|00000000FCAA9560|2944187224|3149222761

I remove this object from the shared pool with the following command, because I want to recompute the execution plan for my sql statement:

exec sys.dbms_shared_pool.purge('00000000FCAA9560,2944187224','c');

I redo my previous select statement on v$sqlarea and it returns 0 rows, so I'm happy with that. Then I execute my original sql, and last I redo my select statement on v$sqlarea and it returns one row with the same values:

sql_id       |address         |hash_value|plan_hash_value
cv65zdurrtfus|00000000FCAA9560|2944187224|3149222761

I was wondering how identical ids were generated; I was expecting new values, even though at the end I have the expected result. Thanks for your feedback. Simon
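
This is expected behaviour: sql_id and hash_value are computed deterministically from the statement text (and plan_hash_value from the plan), so re-parsing identical text reproduces identical values; purging only removes the cursor from the shared pool, it doesn't change how the ids are derived. A quick way to observe this (a sketch; the /* my_tag */ comment is a hypothetical marker to identify the query) is:

-- sql_id / hash_value are hashes of the literal SQL text, so the
-- same text always maps to the same ids:
select sql_id, hash_value, plan_hash_value
from   v$sqlarea
where  sql_text like 'select /* my_tag */%';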
Categories: DBA Blogs

Configuring Transparent Data Encryption -- 2 : For Columns

Hemant K Chitale - Thu, 2021-01-14 05:15
The previous demo of TDE in 19c was for a full Tablespace (converting an existing, non-TDE, Tablespace to an Encrypted Tablespace).

Pre-creating a Table with an Encrypted column would be straightforward :

CREATE TABLE employees (
emp_id number primary key,
first_name varchar2(128),
last_name varchar2(128),
national_id_no varchar2(18) encrypt,
salary number(6) )
tablespace hr_data
/


This encrypts the column with the AES encryption algorithm with a 192-bit key length ("AES192").
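
If you want a different algorithm, you can name it explicitly in the column definition. A sketch, reusing the table above (supported column-encryption algorithms include 3DES168, AES128, AES192 and AES256):

CREATE TABLE employees (
emp_id number primary key,
national_id_no varchar2(18) encrypt using 'AES256',  -- explicit algorithm instead of the AES192 default
salary number(6) )
tablespace hr_data
/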

But what if you want to encrypt an existing, non-encrypted column? You can use the MODIFY clause.

ALTER TABLE employees
MODIFY (national_id_no encrypt)
/


A quick demo :

SQL> create tablespace hr_data datafile '/opt/oracle/oradata/HEMANT/HR_DATA.dbf' size 5M;

Tablespace created.

SQL> CREATE TABLE employees (
2 emp_id number primary key,
3 first_name varchar2(128),
4 last_name varchar2(128),
5 national_id_no varchar2(18),
6 salary number(6) )
7 tablespace hr_data;

Table created.

SQL> ^C

SQL> insert into employees
2 select rownum, 'Hemant', 'Hemant' || to_char(rownum), dbms_random.string('X',12), 1000
3 from dual
4 connect by level < 21
5 /

20 rows created.

SQL> commit;

Commit complete.

SQL> alter system checkpoint;

System altered.

SQL> !sync ; sync

SQL>
SQL> !strings -a /opt/oracle/oradata/HEMANT/HR_DATA.dbf | more
}|{z
HEMANT
3J?5
HR_DATA
H4J?
AAAAAAAA
Hemant
Hemant1
LH6RUZRISE11
Hemant
Hemant2
DFIN8FZ7B6J0
Hemant
Hemant3
PLJ1R2QYRG2C
Hemant
Hemant4
UT3HB9ALF3B5
Hemant
Hemant5
LQMDUTFB2PTM
Hemant
Hemant6
1IGKV4E78M5J
Hemant
Hemant7
P9TQAV5BC5EM
Hemant
Hemant8
V69U6VZWCK26
Hemant
Hemant9
EOTOQHOB0F45
Hemant
Hemant10
OKMEV89XOQE1
Hemant
Hemant11
0D4L77P3YNF0
Hemant
Hemant12
CTMCLJSKQW82
Hemant
Hemant13
49T0AG7E2Y9X
Hemant
Hemant14
ODEY2J51D8RH
Hemant
Hemant15
R1HFMN34MYLH
Hemant
Hemant16
OXI0LOX161BO
Hemant
Hemant17
2XL44ZJVABGW
Hemant
Hemant18
4BIPWVECBWYO
Hemant
Hemant19
732KA25TZ3KR
Hemant
Hemant20
NN0X92ES90PH
AAAAAAAA

SQL> alter table employees
2 MODIFY (national_id_no encrypt)
3 /

Table altered.

SQL> alter system checkpoint;

System altered.

SQL> !sync ; sync

SQL>

SQL> select emp_id, national_id_no
2 from employees
3 order by 1
4 /

EMP_ID NATIONAL_ID_NO
---------- ------------------
1 LH6RUZRISE11
2 DFIN8FZ7B6J0
3 PLJ1R2QYRG2C
4 UT3HB9ALF3B5
5 LQMDUTFB2PTM
6 1IGKV4E78M5J
7 P9TQAV5BC5EM
8 V69U6VZWCK26
9 EOTOQHOB0F45
10 OKMEV89XOQE1
11 0D4L77P3YNF0
12 CTMCLJSKQW82
13 49T0AG7E2Y9X
14 ODEY2J51D8RH
15 R1HFMN34MYLH
16 OXI0LOX161BO
17 2XL44ZJVABGW
18 4BIPWVECBWYO
19 732KA25TZ3KR
20 NN0X92ES90PH

20 rows selected.

SQL>
SQL> select emp_id, dump(national_id_no) col_dump
2 from employees
3 order by emp_id
4 /

EMP_ID COL_DUMP
---------- ------------------------------------------------------
1 Typ=1 Len=12: 76,72,54,82,85,90,82,73,83,69,49,49
2 Typ=1 Len=12: 68,70,73,78,56,70,90,55,66,54,74,48
3 Typ=1 Len=12: 80,76,74,49,82,50,81,89,82,71,50,67
4 Typ=1 Len=12: 85,84,51,72,66,57,65,76,70,51,66,53
5 Typ=1 Len=12: 76,81,77,68,85,84,70,66,50,80,84,77
6 Typ=1 Len=12: 49,73,71,75,86,52,69,55,56,77,53,74
7 Typ=1 Len=12: 80,57,84,81,65,86,53,66,67,53,69,77
8 Typ=1 Len=12: 86,54,57,85,54,86,90,87,67,75,50,54
9 Typ=1 Len=12: 69,79,84,79,81,72,79,66,48,70,52,53
10 Typ=1 Len=12: 79,75,77,69,86,56,57,88,79,81,69,49
11 Typ=1 Len=12: 48,68,52,76,55,55,80,51,89,78,70,48
12 Typ=1 Len=12: 67,84,77,67,76,74,83,75,81,87,56,50
13 Typ=1 Len=12: 52,57,84,48,65,71,55,69,50,89,57,88
14 Typ=1 Len=12: 79,68,69,89,50,74,53,49,68,56,82,72
15 Typ=1 Len=12: 82,49,72,70,77,78,51,52,77,89,76,72
16 Typ=1 Len=12: 79,88,73,48,76,79,88,49,54,49,66,79
17 Typ=1 Len=12: 50,88,76,52,52,90,74,86,65,66,71,87
18 Typ=1 Len=12: 52,66,73,80,87,86,69,67,66,87,89,79
19 Typ=1 Len=12: 55,51,50,75,65,50,53,84,90,51,75,82
20 Typ=1 Len=12: 78,78,48,88,57,50,69,83,57,48,80,72

20 rows selected.

SQL>

SQL> !strings -a /opt/oracle/oradata/HEMANT/HR_DATA.dbf | more
}|{z
HEMANT
3J?5
HR_DATA
AAAAAAAA
Hemant
Hemant204
Hemant
Hemant194
Hemant
Hemant184
Hemant
Hemant174[Q#
Hemant
Hemant164
Hemant
Hemant154
$^?[
Hemant
Hemant1448
Hemant
Hemant134
Hemant
Hemant124
Hemant
Hemant114
Hemant
Hemant104J
Hemant
Hemant94
Hemant
Hemant84M
zCAGp
Q(ru
Hemant
Hemant74
$o7tN
Hemant
Hemant6418
( i+W
Hemant
Hemant54
f(cCL
Hemant
Hemant44
Hemant
Hemant34
Hemant
Hemant24
e{_
Hemant
Hemant14
Hemant
Hemant1
LH6RUZRISE11
Hemant
Hemant2
DFIN8FZ7B6J0
Hemant
Hemant3
PLJ1R2QYRG2C
Hemant
Hemant4
UT3HB9ALF3B5
Hemant
Hemant5
LQMDUTFB2PTM
Hemant
Hemant6
1IGKV4E78M5J
Hemant
Hemant7
P9TQAV5BC5EM
Hemant
Hemant8
V69U6VZWCK26
Hemant
Hemant9
EOTOQHOB0F45
Hemant
Hemant10
OKMEV89XOQE1
Hemant
Hemant11
0D4L77P3YNF0
Hemant
Hemant12
CTMCLJSKQW82
Hemant
Hemant13
49T0AG7E2Y9X
Hemant
Hemant14
ODEY2J51D8RH
Hemant
Hemant15
R1HFMN34MYLH
Hemant
Hemant16
OXI0LOX161BO
Hemant
Hemant17
2XL44ZJVABGW
Hemant
Hemant18
4BIPWVECBWYO
Hemant
Hemant19
732KA25TZ3KR
Hemant
Hemant20
NN0X92ES90PH
AAAAAAAA


SQL> select version, version_full from v$instance;

VERSION VERSION_FULL
----------------- -----------------
19.0.0.0.0 19.3.0.0.0

SQL>


When I insert a new row, the plain-text for it is not present. But the plain-text of the old 20 rows is still present.

SQL> insert into employees
2 values (21,'HemantNew','HemantNew21','ABCDEFGHIJ88',2000);

1 row created.

SQL> commit;

Commit complete.

SQL> alter system checkpoint;

System altered.

SQL> !sync;sync

SQL> !strings -a /opt/oracle/oradata/HEMANT/HR_DATA.dbf
}|{z
HEMANT
3J?5
SJ?
HR_DATA
UTJ?
AAAAAAAA
HemantNew
HemantNew214S
Hemant
Hemant204
Hemant
Hemant194
Hemant
Hemant184
Hemant
Hemant174[Q#
Hemant
Hemant164
Hemant
Hemant154
$^?[
Hemant
Hemant1448
Hemant
Hemant134
Hemant
Hemant124
Hemant
Hemant114
Hemant
Hemant104J
Hemant
Hemant94
Hemant
Hemant84M
zCAGp
Q(ru
Hemant
Hemant74
$o7tN
Hemant
Hemant6418
( i+W
Hemant
Hemant54
f(cCL
Hemant
Hemant44
Hemant
Hemant34
Hemant
Hemant24
e{_
Hemant
Hemant14
Hemant
Hemant1
LH6RUZRISE11
Hemant
Hemant2
DFIN8FZ7B6J0
Hemant
Hemant3
PLJ1R2QYRG2C
Hemant
Hemant4
UT3HB9ALF3B5
Hemant
Hemant5
LQMDUTFB2PTM
Hemant
Hemant6
1IGKV4E78M5J
Hemant
Hemant7
P9TQAV5BC5EM
Hemant
Hemant8
V69U6VZWCK26
Hemant
Hemant9
EOTOQHOB0F45
Hemant
Hemant10
OKMEV89XOQE1
Hemant
Hemant11
0D4L77P3YNF0
Hemant
Hemant12
CTMCLJSKQW82
Hemant
Hemant13
49T0AG7E2Y9X
Hemant
Hemant14
ODEY2J51D8RH
Hemant
Hemant15
R1HFMN34MYLH
Hemant
Hemant16
OXI0LOX161BO
Hemant
Hemant17
2XL44ZJVABGW
Hemant
Hemant18
4BIPWVECBWYO
Hemant
Hemant19
732KA25TZ3KR
Hemant
Hemant20
NN0X92ES90PH
AAAAAAAA

SQL>



So, it seems that after I ran the MODIFY to encrypt a column, Oracle created new copies of the 20 rows with encrypted values.  However, the old plain-text (non-encrypted) values are still present in the datafile.

Apparently, those "still present" plain-text representations of the "NATIONAL_ID_NO" column in the datafile are explained in the documentation as:

"Column values encrypted using TDE are stored in the data files in encrypted form. However, these data files may still contain some plaintext fragments, called ghost copies, left over by past data operations on the table. This is similar to finding data on the disk after a file was deleted by the operating system."

You should remove old plaintext fragments that can appear over time.

Old plaintext fragments may be present for some time until the database overwrites the blocks containing such values. If privileged operating system users bypass the access controls of the database, then they might be able to directly access these values in the data file holding the tablespace.

To minimize this risk:

  1. Create a new tablespace in a new data file.

    You can use the CREATE TABLESPACE statement to create this tablespace.

  2. Move the table containing encrypted columns to the new tablespace. You can use the ALTER TABLE.....MOVE statement.

    Repeat this step for all of the objects in the original tablespace.

  3. Drop the original tablespace.

    You can use the DROP TABLESPACE tablespace INCLUDING CONTENTS KEEP DATAFILES statement. Oracle recommends that you securely delete data files using platform-specific utilities.

  4. Use platform-specific and file system-specific utilities to securely delete the old data file. Examples of such utilities include shred (on Linux) and sdelete (on Windows).
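
Putting those four steps together, a minimal sketch might look like this (the new tablespace and datafile names are hypothetical, and a table MOVE leaves its indexes UNUSABLE, so they must be rebuilt):

create tablespace hr_data_new
datafile '/opt/oracle/oradata/HEMANT/HR_DATA_NEW.dbf' size 5M;

alter table employees move tablespace hr_data_new;

-- rebuild every index on the moved table; the index name here is a placeholder
alter index <pk_index_name> rebuild;

drop tablespace hr_data including contents keep datafiles;

-- then securely delete the old datafile at the OS level, e.g.
-- shred /opt/oracle/oradata/HEMANT/HR_DATA.dbf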

Categories: DBA Blogs

Between

Jonathan Lewis - Thu, 2021-01-14 05:07

Reading Richard Foote’s latest blog note about automatic indexing and “non-equality” predicates I was struck by a whimsical thought about how the optimizer handles “between” predicates. (And at the same time I had to worry about the whimsical way that WordPress treats “greater than” and “less than” symbols.)

It’s probably common knowledge that if your SQL has lines like this:

columnA between {constant1} and {constant2}

the optimizer will transform them into lines like these:

    columnA >= {constant1}
and columnA <= {constant2}

The question that crossed my mind – and it was about one of those little details that you might never look at until someone points it out – was this: “does the optimizer get clever about which constant to use first?”

The answer is yes (in the versions I tested). Here’s a little demonstration:

rem
rem     Script:         between.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2021
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

create table t1
as
select
        rownum  rn,
        ao.*
from
        all_objects ao
where
        rownum <= 50000
;

set autotrace traceonly explain

select  object_name
from    t1
where
        rn between 45 and 55
;


select  object_name
from    t1
where
        rn between 49945 and 49955
;


select  object_name
from    t1
where
        rn between 24945 and 24955
;

select  object_name
from    t1
where
        rn between 25045 and 25055
;

set autotrace off

All I’ve done is create a table with 50,000 rows and a column that is basically a unique sequence number between 1 and 50,000. Then I’ve checked the execution plans for a simple query for 11 rows based on the sequence value – but for different ranges of values.

Two of the ranges are close to the low and high values for the sequence; two of the ranges are close to, but either side of, the mid-point value (25,000) of the sequence. The big question is: “does the execution plan change with choice of range?”. The answer is Yes, and No.

No … because the only possible execution path is a full tablescan

Yes … because when you examine the plan properly you’ll notice a change in the Predicate Information. Here are the first two execution plans produced by the calls to dbms_xplan.display():

Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    12 |   528 |   140   (5)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |    12 |   528 |   140   (5)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("RN"<=55 AND "RN">=45)

Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    12 |   528 |   140   (5)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |    12 |   528 |   140   (5)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("RN">=49945 AND "RN"<=49955)

Notice how the order of the filter predicates has changed as we move from one end of the range to the other. The optimizer has decided to do the test that is more likely to fail first, and the test that is more likely to succeed second (which means there won’t be many rows where it has to run both tests, and that will make a small difference in CPU usage).

Picking out just the filter predicate line from the output for this script (host grep filter between.lst) you can see the same pattern appear when the values supplied are very close to the mid-point (25,000).

SQL> host grep filter between.lst
   1 - filter("RN"<=55 AND "RN">=45)
   1 - filter("RN">=49945 AND "RN"<=49955)
   1 - filter("RN"<=24955 AND "RN">=24945)
   1 - filter("RN">=25045 AND "RN"<=25055)

My code has used literal values to demonstrate an effect. It’s worth checking whether we would still see the same effect if we were using bind variables (and bind variable peeking were enabled). So here’s a little more of the script:

set serveroutput off

variable b1 number
variable b2 number

exec :b1 := 45
exec :b2 := 55

select
        /* low_test */
        object_name
from    t1
where
        rn between :b1 and :b2
/

select * from table(dbms_xplan.display_cursor(format=>'basic +predicate'));

exec :b1 := 49945
exec :b2 := 49955

select
        /* high_test */
        object_name
from    t1
where
        rn between :b1 and :b2
/

select * from table(dbms_xplan.display_cursor(format=>'basic +predicate'));
set serveroutput on

Since autotrace simply calls “explain plan” and doesn’t know anything about bind variables (treating them as unpeekable character strings) I’ve used code that executes the statements and pulls the plans from memory. Here are the results (with some of the script’s output deleted):

EXPLAINED SQL STATEMENT:
------------------------
select  /* low_test */  object_name from t1 where  rn between :b1 and :b2

Plan hash value: 3332582666

-----------------------------------
| Id  | Operation          | Name |
-----------------------------------
|   0 | SELECT STATEMENT   |      |
|*  1 |  FILTER            |      |
|*  2 |   TABLE ACCESS FULL| T1   |
-----------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:B2>=:B1)
   2 - filter(("RN"<=:B2 AND "RN">=:B1))


EXPLAINED SQL STATEMENT:
------------------------
select  /* high_test */  object_name from t1 where  rn between :b1 and :b2

Plan hash value: 3332582666

-----------------------------------
| Id  | Operation          | Name |
-----------------------------------
|   0 | SELECT STATEMENT   |      |
|*  1 |  FILTER            |      |
|*  2 |   TABLE ACCESS FULL| T1   |
-----------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:B2>=:B1)
   2 - filter(("RN">=:B1 AND "RN"<=:B2))

As you can see, when we query the low value the first comparison is made against :b2, when we query the high range the first comparison is made against :b1.

It is actually worth knowing that this can happen. How many times have you heard the question: “the plan’s the same, why is the performance different?”. Maybe the body of the plan looks the same and has the same plan_hash_value, but today the first person to execute the query supplied bind values that made the optimizer choose to apply the filters in the opposite order to usual. This probably won’t make much difference to CPU usage in most cases, but there are bound to be a few cases where it matters.

You’ll notice, by the way, that the plan with bind variables includes a FILTER operation that doesn’t appear in the plans with literal values. This is an example of “conditional SQL” – if you check the predicate information for operation 1 you’ll see that it’s checking that :b2 is greater than :b1, if this test doesn’t evaluate to true then operation 1 will not make a call to operation 2, i.e. the tablescan is in the plan but won’t happen at run-time.

(I believe that there may be some RDBMS which will treat (e.g.) “X between 20 and 10” as being identical to “X between 10 and 20” – Oracle doesn’t.)

Left as an exercise

The test data was created as a completely evenly spaced (by value) and evenly distributed (by count) set of values. How would things change if the data were sufficiently skewed that the optimizer would default to creating a histogram when gathering stats?

Left as another exercise**

There are lots of little bits of arithmetic that go into the CPU_COST component of an execution plan – including a tiny factor to allow for the number of columns that Oracle has to “step over” (by counting bytes) as it projects the columns needed by the query. So if you had a second “between” predicate on another column in the table, could you manage to get all 24 possible orders for the 4 transformed predicates by adjusting the ranges of the between clauses and/or moving the two columns to different positions in the row?

** For those in lockdown who need something to do to fill the time.

Oracle 19c Automatic Indexing: Non-Equality Predicates Part I (Lucy Can’t Dance)

Richard Foote - Thu, 2021-01-14 00:43
  I’ve been waiting a while before posting a series on the various limitations associated with Automatic Indexing, in order to see how the feature matures over time. The following have all been re-tested post 1 January 2021 on the Autonomous ATP Database Cloud service, using Oracle Database version 19.5.0.0.0. In the Oracle Documentation (including […]
Categories: DBA Blogs

The APEX Journey Continues

Joel Kallman - Wed, 2021-01-13 10:51

Today, Oracle formally launched the Oracle APEX Application Development Service. This is a brand new service on Oracle Cloud, squarely targeted at application developers. You can read all about it in this blog post, and here is the press release.

Oracle APEX is unique among frameworks and even low code platforms.  The architecture is "radical", as an Oracle executive described it.  Using just a little SQL, you can build wonderfully rich applications against virtually any type of data, and your apps can scale to thousands and thousands of users.

This all started back in 1999, and I was fortunate to be employee #1 under Michael Hichwa. It was Mike's vision and ingenuity which launched Oracle APEX, and it began with just the two of us. After 5 years of proving it with real-world applications, both inside of Oracle and outside, it was launched as Oracle HTML DB in 2004 - as a feature of Oracle Database. It took a lot of work and a little executive nudging to make this happen. Over the past 17 years, we've added some of the smartest people in the world to the team, and they have helped evolve Oracle APEX to what it is today. Another brilliant team at Oracle delivered on a vision which is realized today in Oracle REST Data Services (ORDS), and this is what powers APEX everywhere.  ORDS is a powerful and enabling technology.  We've developed numerous internal applications at Oracle, and we've also helped thousands of others at Oracle to develop their own solutions too. We've made mistakes along the way, but we've learned from them too. We became the advocate of our customers within Oracle, and we also developed a close, personal relationship with our community - hundreds if not thousands of them.

During the pandemic of 2020, our team was called upon to quickly deliver solutions using Oracle APEX. In conjunction with hundreds of other employees, and utilizing Oracle Cloud and Oracle engineered systems, we delivered solutions in record time. I'm convinced no other organization on the planet could have achieved this. I believe it was through these efforts that Oracle fully realized what they had in APEX.  The work on these and other new systems continues, and you'll see even more Oracle Cloud solutions from Oracle developed with APEX in the future. What other low code vendor on the planet can claim that their platform is used to quickly develop SaaS solutions for literally millions of end users? 

That brings us to today. The APEX Application Development Service is a culmination of years of research, development, and real-world usage. It is a perfect confluence of technologies, of Oracle Autonomous Database, Oracle Exadata engineered systems and Oracle Cloud Infrastructure, combined with the proven Oracle APEX framework. There are problems to be fixed, features to be added, and functionality to broaden, but it's still a great achievement. Everyone on the Oracle APEX team should be very proud - they have all worked so very hard to get here.

I must also give credit to you, the Oracle APEX community, for the Oracle APEX Application Development Service. Without your support, guidance, and enthusiasm, APEX would not be where it is today. So many people in the APEX community have contributed to APEX - in immeasurable ways, both big and small. There have been a great number of bug reports, feature suggestions, marketing recommendations and requested enhancements from the community.  We've also had many meetings with our customers and partners seeking their architectural advice. Everyone has always been so helpful and gracious with their time and talent.

This has been an amazing journey.  The Oracle APEX Application Development Service is a huge milestone in that journey.  We look forward to the next 20 years and what that will bring.  And we look forward to doing it hand-in-hand with our amazing Oracle APEX community.

Check Constraints

Jonathan Lewis - Wed, 2021-01-13 09:17

This is a note I drafted in 2018 but never got around to publishing. It’s an odd quirk of behaviour that I discovered in 12.2.0.1 but I’ve just checked and it’s still present in 19.3.0.0.

Here’s a funny little thing that I found while checking some notes I had on adding constraints with minimum service interruption – a topic I last wrote about a couple of years ago [ed. now nearly 5 years ago]. This time around I did something a little different, and here’s a cut-n-paste from the first couple of steps when I had previously deleted a row from another session without committing (table t1 is a table I created as select * from all_objects).

Note that the first SQL statement uses “disable” while the second uses “enable”:


SQL> alter table t1 add constraint c1 check(owner = upper(owner)) disable novalidate;
alter table t1 add constraint c1 check(owner = upper(owner)) disable novalidate
            *
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

SQL> alter table t1 add constraint c1 check(owner = upper(owner)) enable novalidate;

At this point my session was hanging – and I find it a little surprising that the attempt to create the constraint disabled returns an immediate ORA-00054, while the attempt to create it enabled waits. A quick check of v$lock showed that my session was requesting a TX enqueue in mode 4 (transaction, share mode), waiting for the other session to commit or roll back.

In the following output from 12.1.0.2 my session is SID 16 and I’ve simply reported all the rows for the two sessions from v$lock:


       SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK     CON_ID
---------- -- ---------- ---------- ---------- ---------- ---------- ---------- ----------
        16 TX     327704      12790          0          4        169          0          0
           TX      65550       9613          6          0        169          0          0
           TM     192791          0          2          0        169          0          0
           OD     192791          0          4          0        169          0          0
           AE        133          0          4          0        579          0          0

       237 TX     327704      12790          6          0        466          1          0
           TM     192791          0          3          0        466          0          0
           AE        133          0          4          0        582          0          0

You’ll notice my session is holding an OD enqueue in mode 4 and a TM lock in mode 2 – the value 192791 is the object_id of the table in question. The OD lock is described in v$lock_type as “Lock to prevent concurrent online DDLs”.

It would appear, therefore, that we are stuck until the other session commits – so I hit ctrl-C to interrupt the wait, and then tried to add the constraint again, still without committing (or rolling back) the other session. Here’s the cut-n-paste from that sequence of events:


alter table t1 add constraint c1 check(owner = upper(owner)) enable novalidate
*
ERROR at line 1:
ORA-01013: user requested cancel of current operation

SQL> alter table t1 add constraint c1 check(owner = upper(owner)) enable novalidate;
alter table t1 add constraint c1 check(owner = upper(owner)) enable novalidate
                              *
ERROR at line 1:
ORA-02264: name already used by an existing constraint

I’ve interrupted the command and “cancelled” the current operation – but it seems that I have successfully added the constraint anyway!

SQL> select constraint_name, constraint_type, search_condition from user_constraints where table_name = 'T1';

CONSTRAINT_NAME      C SEARCH_CONDITION
-------------------- - --------------------------------------------------------------------------------
SYS_C0018396         C "OWNER" IS NOT NULL
SYS_C0018397         C "OBJECT_NAME" IS NOT NULL
SYS_C0018398         C "OBJECT_ID" IS NOT NULL
SYS_C0018399         C "CREATED" IS NOT NULL
SYS_C0018400         C "LAST_DDL_TIME" IS NOT NULL
SYS_C0018401         C "NAMESPACE" IS NOT NULL
C1                   C owner = upper(owner)

And this is what happened when I switched to the other session – where I had still not committed or rolled back – and tried to execute an update:


SQL> update t1 set owner = lower(owner) where owner = 'SYSTEM' and rownum = 1;
update t1 set owner = lower(owner) where owner = 'SYSTEM' and rownum = 1
*
ERROR at line 1:
ORA-02290: check constraint (TEST_USER.C1) violated

So the constraint really is present and is visible to other sessions – even though the attempt to add it hung and had to be interrupted!

I can’t think of any reason why this might cause a problem in the real world – but it is an oddity that might have echoes in other cases where it matters.

Introduction to AWS Route 53

Online Apps DBA - Wed, 2021-01-13 00:47

AWS Route 53 is one of the most popular and widely used services of Amazon Web Services, largely because it is highly available, reliable, and flexible for customers to use. In this blog at k21academy.com/awssa33, we are going to cover everything that you need to understand about AWS Route 53:- 1.Overview of […]

The post Introduction to AWS Route 53 appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs
