Feed aggregator

Generating a random number of rows for every date within a date range

Tom Kyte - Thu, 2021-07-29 19:46
I have some working SQL below that generates a row for each employee_id. My goal is to get every date in the range via the function, which works fine standalone, then get N (a random number from 1 to 10) rows for each employee_id for every date in the range specified. Once the SQL works, I intend to put this code in a procedure so I can pass it a range of dates. So we can be sure we are both running the same version of Oracle, I tested this on Live SQL. Below is some sample output for a single day only. Please note the employee_id and location_id must exist in their corresponding tables. Since my function call always generates dates with a time of 00:00:00, I plan on eventually adding a time component to the access_date. <code>EMPLOYEE_ID CARD_NUM LOCATION_ID ACCESS_DATE
1           F123456  10          07302021 09:47:48
1           F123456  5           07282021 19:17:42
2           R33432   4           07282021 02:00:37
3           C765341  2           07282021 17:33:57
3           C765341  6           07282021 17:33:57
3           C765341  1           07282021 18:53:07
4           D564311  6           07282021 03:06:37

ALTER SESSION SET NLS_DATE_FORMAT = 'MMDDYYYY HH24:MI:SS';

CREATE OR REPLACE TYPE nt_date IS TABLE OF DATE;
/

CREATE OR REPLACE FUNCTION generate_dates_pipelined(
    p_from IN DATE,
    p_to   IN DATE
)
RETURN nt_date PIPELINED DETERMINISTIC
IS
    v_start DATE := TRUNC(LEAST(p_from, p_to));
    v_end   DATE := TRUNC(GREATEST(p_from, p_to));
BEGIN
    LOOP
        PIPE ROW (v_start);
        EXIT WHEN v_start >= v_end;
        v_start := v_start + INTERVAL '1' DAY;
    END LOOP;
    RETURN;
END generate_dates_pipelined;
/

CREATE TABLE employees(
    employee_id NUMBER(6),
    first_name  VARCHAR2(20),
    last_name   VARCHAR2(20),
    card_num    VARCHAR2(10),
    work_days   VARCHAR2(7)
);

ALTER TABLE employees ADD (
    CONSTRAINT employees_pk PRIMARY KEY (employee_id)
);

INSERT INTO employees (employee_id, first_name, last_name, card_num, work_days)
WITH names AS (
    SELECT 1, 'Jane',    'Doe',   'F123456', 'NYYYYYN' FROM dual UNION ALL
    SELECT 2, 'Madison', 'Smith', 'R33432',  'NYYYYYN' FROM dual UNION ALL
    SELECT 3, 'Justin',  'Case',  'C765341', 'NYYYYYN' FROM dual UNION ALL
    SELECT 4, 'Mike',    'Jones', 'D564311', 'NYYYYYN' FROM dual
)
SELECT * FROM names;

CREATE TABLE locations AS
    SELECT level AS location_id,
           'Door ' || level AS location_name,
           CASE round(dbms_random.value(1,3))
               WHEN 1 THEN 'A'
               WHEN 2 THEN 'T'
               WHEN 3 THEN 'G'
           END AS location_type
    FROM dual
    CONNECT BY level <= 10;

ALTER TABLE locations ADD (
    CONSTRAINT locations_pk PRIMARY KEY (location_id)
);

SELECT e.employee_id,
       e.card_num,
       l.location_id,
       c.access_date,
       e.rn, l.rn, c.rn
FROM (
    SELECT employee_id,
           round(dbms_random.value(1, 10)) rn,
           card_num
    FROM employees
) e
INNER JOIN (
    SELECT location_id,
           row_number() OVER (ORDER BY dbms_random.value) AS rn
    FROM locations
) l ON (e.rn = l.rn)
INNER JOIN (
    SELECT COLUMN_VALUE AS access_date,
           row_number() OVER (ORDER BY dbms_random.value) AS rn
    FROM TABLE(generate_dates_pipelined(SYSDATE, ADD_MONTHS(SYSDATE, 1)))
) c ON (e.rn >= c.rn)
ORDER BY employee_id, location_id;</code>
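One possible shape for the final query - an untested sketch, not the accepted answer: use a lateral row generator so each employee/date pair gets its own random 1-10 rows, pick a random existing location_id for each generated row, and add a random fraction of a day for the time. This assumes Oracle 12c or later for CROSS APPLY, and that location_id values are the contiguous 1 to 10 created by the script above; the 0 * e.employee_id term only correlates the row generator so the random row count is re-evaluated for every outer row. <code>SELECT e.employee_id,
       e.card_num,
       trunc(dbms_random.value(1, 11)) AS location_id,          -- random existing door (assumes ids 1-10)
       d.COLUMN_VALUE + dbms_random.value(0, 1) AS access_date  -- add a random time of day
FROM   employees e
CROSS JOIN TABLE(generate_dates_pipelined(SYSDATE, ADD_MONTHS(SYSDATE, 1))) d
CROSS APPLY (
          SELECT level AS n
          FROM   dual
          CONNECT BY level <= round(dbms_random.value(1, 10)) + 0 * e.employee_id
      ) r
ORDER BY e.employee_id, access_date;</code>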
Categories: DBA Blogs

Joining Data in OAC

Rittman Mead Consulting - Thu, 2021-07-29 08:41

One of the new features in OAC 6.0 was Multi Table Datasets, which provides another way to join tables to create a Data Set.

We can already define joins in the RPD, use joins in OAC’s Data Flows and join Data Sets using blending in DV Projects, so I went on a little quest to compare the pros and cons of each of the methods to see if I can conclude which one works best.

What is a data join?

Data in databases is generally spread across multiple tables and it is difficult to understand what the data means without putting it together. Using data joins we can stitch the data together, making it easier to find relationships and extract the information we need. To join two tables, they must share at least one column containing matching values. There are four available types of joins I’ll evaluate:

1.    Inner join - returns records that have matching values in both tables. All the other records are excluded.

2.    Left (outer) join - returns all records from the left table with the matched records from the right table.

3.    Right (outer) join - returns all records from the right table with the matched records from the left table.

4.    Full (outer) join - returns all records from both tables, combined where there is a match.

Each of these approaches gives the developer different ways and places to define the relationship (join) between the tables. Underpinning all of the approaches is SQL. Ultimately, OAC will generate a SQL query that will retrieve data from the database, so to understand joins, let's start by looking at SQL joins.

SQL Joins

In an SQL query, a JOIN clause is used to combine rows from two or more tables. Here is an example of an inner join:

SELECT EMP.id, EMP.name, DEPT.stream
FROM EMP
INNER JOIN DEPT ON DEPT.id = EMP.id;  
Figure 1 - Example of an inner join on a sample dataset.
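For reference, the other three join types follow exactly the same pattern, only the JOIN keyword changes (illustrative, using the same hypothetical EMP and DEPT tables):

SELECT EMP.id, EMP.name, DEPT.stream
FROM EMP
LEFT OUTER JOIN DEPT ON DEPT.id = EMP.id;   -- all EMP rows, matched DEPT rows where they exist

SELECT EMP.id, EMP.name, DEPT.stream
FROM EMP
RIGHT OUTER JOIN DEPT ON DEPT.id = EMP.id;  -- all DEPT rows, matched EMP rows where they exist

SELECT EMP.id, EMP.name, DEPT.stream
FROM EMP
FULL OUTER JOIN DEPT ON DEPT.id = EMP.id;   -- all rows from both tables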

Now that we understand the basic concepts, let’s look at the options available in OAC.

Option 1: RPD Joins

The RPD is where the BI Server stores its metadata. Defining joins in the RPD is done in the Admin Client Tool and is the most rigorous of the join options. Joins are defined during the development of the RPD and, as such, are subject to the software development lifecycle and are typically thoroughly tested.

End users access the data through Subject Areas, either using classic Answers and Dashboards, or DV. This means the join is automatically applied as fields are selected, giving you more control over your data and, since the RPD is not visible to end users, preventing them from making any incorrect joins.

The main downside of defining joins in the RPD is that it’s a slower process - if your boss expects you to draw up some analyses by the end of the day, you may not make the deadline using the RPD. It takes time to organise data, make changes, then test and release the RPD.

Join Details

The Admin Client Tool allows you to define logical and physical tables, aggregate table navigation, and physical-to-logical mappings. In the physical layer you define primary and foreign keys using either the properties editor or the Physical Diagram window. Once the columns have been mapped to the logical tables, logical keys and joins need to be defined. Logical keys are generally automatically created when mapping physical key columns. Logical joins do not specify join columns, these are derived from the physical mappings.

You can change the properties of the logical join; in the Business Model Diagram you can set a driving table (which optimises how the BI Server processes joins when one table is smaller than the other), the cardinality (which expresses how rows in one table are related to rows in the table to which it is joined), and the type of join.

Driving tables only activate query optimisation within the BI Server when one of the tables is much smaller than the other. When you specify a driving table, the BI Server only uses it if the query plan determines that its use will optimise query processing. In general, driving tables can be used with inner joins, and for outer joins when the driving table is the left table for a left outer join, or the right table for a right outer join. Driving tables are not used for full outer joins.

The Physical Diagram join also gives you an expression editor to manually write SQL for the join you want to perform on desired columns, introducing complexity and flexibility to customise the nature of the join. You can define complex joins, i.e. those over non-foreign key and primary key columns, using the expression editor rather than key column relationships. Complex joins are not as efficient, however, because they don’t use key column relationships.
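As an illustration, a complex join defined in the expression editor might relate a fact table to a calendar table on a date range rather than on key columns (hypothetical column names, shown unqualified for readability; in the RPD the columns would be fully qualified physical columns):

SALES.ORDER_DATE BETWEEN CALENDAR.PERIOD_START AND CALENDAR.PERIOD_END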

Figure 3 - Business Model Diagram depicting joins made between three tables in the RPD.

It’s worth addressing a separate type of table available for creation in the RPD – lookup tables. Lookup tables can be added to reference both physical and logical tables, and there are several use cases for them e.g., pushing currency conversions to separate calculations. The RPD also allows you to define a logical table as being a lookup table in the common use case of making ID to description mappings.

Lookup tables can be sparse and/or dense in nature. A dense lookup table contains translations in all languages for every record in the base table. A sparse lookup table contains translations for only some records in the base table. They can be accessed via a logical calculation using DENSE or SPARSE lookup function calls. Lookup tables are handy as they allow you to model the lookup data within the business model; they’re typically used for lookups held in different databases to the main data set.

Multi-database joins allow you to join tables from across different databases. Even though the BI Server can optimise the performance of multi-database joins, these joins are significantly slower than those within a single database.

Option 2: Data Flow Joins

Data Flows provide a way to transform data in DV. The data flow editor gives us a canvas where we can add steps to transform columns, merge, join or filter data, plot forecasts or apply various models on our datasets.

When it comes to joining datasets, you start by adding two or more datasets. If they have one or more matching columns, DV automatically detects this and joins them; otherwise, you have to manually add a ‘Join’ step and, provided the columns’ data types match, a join is created.

A join in a data flow is only possible between two datasets, so if you wanted to join a third dataset you have to create a join between the output of the first and second tables and the third, and so on. You can give your join a name and description which would help keep track if there are more than two datasets involved. You can then view and edit the join properties via these nodes created against each dataset. DV gives you the standard four types of joins (Fig. 1), but they are worded differently; you can set four possible combinations for each input node by toggling between ‘All rows’ and ‘Matching rows’. That means:

Join type      Node 1           Node 2
Inner join     Matching rows    Matching rows
Left join      All rows         Matching rows
Right join     Matching rows    All rows
Full join      All rows         All rows

The table above explains which type of join can be achieved by toggling between the two drop-down options for each dataset in a data flow join.

Figure 4 - Data flow editor where joins can be made and edited between two datasets.

It’s worth mentioning there is also an operator called ‘Union Rows’. You can concatenate two datasets, provided they have the same number of columns with compatible datatypes. There are a number of options to decide how you want to combine the rows of the datasets.

Figure 5 - Union Rows operator displaying how the choices made impact columns in the data.
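In SQL terms, Union Rows behaves much like a UNION ALL over two datasets with the same number of columns and compatible data types (a sketch with hypothetical table names):

SELECT order_id, customer_name, amount FROM orders_2020
UNION ALL
SELECT order_id, customer_name, amount FROM orders_2021;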

One advantage of data flows is they allow you to materialise the data i.e. save it to disk or a database table. If your join query takes 30 minutes to run, you can schedule it to run overnight and then reports can query the resultant dataset.

However, there are limited options as to the complexity of joins you can create:

  1. the absence of an expression editor to define complex joins
  2. you cannot join more than two datasets at a time.

You can schedule data flows which would allow you to materialise the data overnight ready for when users want to query the data at the start of the day.

Data Flows can be developed and executed on the fly, unlike the longer development lifecycle of the RPD.

It should be noted that Data Flows cannot be shared. The only way around this is to export the Data Flow and have the other user import and execute it. The other user will need to be able to access the underlying Data Sets.

Option 3: Data Blending

Before looking at the new OAC feature, it's worth noting there is already a method for cross-database joins: data blending.

Given at least two data sources, for example, a database connection and an Excel spreadsheet from your computer, you can create a Project with one dataset and add the other Data Set under the Visualise tab. The system tries to find matches for the data that's added based on common column names and compatible data types. Upon navigating back to the Data tab, you can also manually blend the datasets by selecting matching columns from each dataset. However, there is no ability to edit any other join properties.

Figure 6 - Existing method of blending data by matching columns from each dataset.

Option 4: Multi Table Datasets

Lastly, let’s look at the newly added feature of OAC 6.0: Multi Table Datasets. Oracle have now made it possible to join several tables to create a new Data Set in DV.

Historically you could create Data Sets from a database connection or upload files from your computer. You can now create a new Data Set and add multiple tables from the same database connection. Oracle has published a list of compatible data sources.

Figure 7 - Data Set editor where joins can be made between numerous datasets and their properties edited.

Once you add your tables DV will automatically populate joins, if possible, on common column names and compatible data types.

The process works similarly to how joins are defined in Data Flows; a pop-up window displays editable join properties with the same options - change the join type, the columns to match, the operator relating them, and add further join conditions.

Figure 8 - Edit window for editing join properties.

The data preview refreshes upon changes made in the joins, making it easy to see the impact of joins as they are made.

Unlike in the RPD, you do not have to create aliases of tables in order to use them for multiple purposes; you can directly import tables from a database connection, create joins and save this Multi Table Dataset separately to then use it further in a project, for example. So, the original tables you imported will retain their original properties.

If you need to write complex queries you can use the Manual SQL Query editor to create a Data Set, but you can only use the ‘JOIN’ clause.
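For example, a manual query behind such a Data Set might look like this (hypothetical tables from the same connection):

SELECT o.order_id, o.order_date, c.customer_name, p.product_name
FROM   orders o
JOIN   customers c ON c.customer_id = o.customer_id
JOIN   products  p ON p.product_id  = o.product_id;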

Figure 9 - Manual Query editor option under a set up database connection.

So, what’s the verdict?

Well, after experimenting with each type of joining method and talking to colleagues with experience, the verdict is: it depends on the use case.

There really is no right or wrong method of joining datasets and each approach has its pros and cons, but I think what matters is evaluating which one would be more advantageous for the particular use case at hand.

Using the RPD is a safer and more robust option: you have control over the data from start to end, so you can reduce the chance of incorrect joins. However, it is considerably slower and may not be feasible if users demand quick results. In this case, using one of the options in DV may be more beneficial.

You could use Data Flows, either scheduled or run manually, or Multi Table Datasets. Both approaches have less scope for making complex joins than the RPD. You can only join two Data Sets at a time in the traditional data flow, and you need a workaround in DV to join data across database connections and computer-uploaded files; so if time and efficiency are of the essence, these can be a disadvantage.

I would say it's about striking a balance between turnaround time and quality - of course good data analysis delivered in good time is desirable, but when it comes to joining datasets in these platforms it's worth evaluating how the use case will benefit from either end of the spectrum.

Categories: BI & Warehousing

Stats_mode function - Deterministic or Non-Deterministic

Tom Kyte - Thu, 2021-07-29 01:26
Stats_mode function - Deterministic or Non-Deterministic? What does Oracle return when there are multiple keys with the same (highest) mode, and how?
Categories: DBA Blogs

using explain plan

Tom Kyte - Wed, 2021-07-28 07:06
How should I use EXPLAIN PLAN for tuning my SQL statements? Kindly advise me on the sequence of steps to be followed. Some live examples would also be very helpful. With Regards, Ramesh.S
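A minimal example of the basic workflow (a sketch only, not the full answer; EMP is a hypothetical table): generate the plan, then display it with DBMS_XPLAN. <code>EXPLAIN PLAN FOR
  SELECT * FROM emp WHERE deptno = 10;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);</code>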
Categories: DBA Blogs

Oracle TDE - AES encryption mode of operation

Tom Kyte - Wed, 2021-07-28 07:06
Product: Oracle Database 19c Transparent Data Encryption (TDE) From Chapter 10 of the Advanced Security Guide, we know for the supported block ciphers "table keys are used in cipher block chaining (CBC) operating mode, and the tablespace keys are used in cipher feedback (CFB) operating mode." https://docs.oracle.com/en/database/oracle/oracle-database/19/asoag/frequently-asked-questions-about-transparent-data-encryption.html Question 1: Both modes of operation require an Initialization Vector to be specified; however, TDE does not allow the DBA to specify an IV. What IV does TDE actually use? Is it pseudorandom or a fixed value such as all zeros? Question 2: If the IV is fixed, it would leak information, e.g. for CBC mode it makes it deterministic, so the same plaintext always maps to the same ciphertext. So, is it possible to enhance TDE to allow an IV to be specified in the same way that DBMS_CRYPTO currently does? Thanks
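For context, this is the kind of caller-supplied IV the question refers to - a minimal DBMS_CRYPTO sketch (illustrative only; TDE itself exposes no such parameter): <code>DECLARE
  l_key RAW(32) := DBMS_CRYPTO.RANDOMBYTES(32);
  l_iv  RAW(16) := DBMS_CRYPTO.RANDOMBYTES(16);  -- the caller chooses the IV here
  l_enc RAW(2000);
BEGIN
  l_enc := DBMS_CRYPTO.ENCRYPT(
             src => UTL_I18N.STRING_TO_RAW('some plaintext', 'AL32UTF8'),
             typ => DBMS_CRYPTO.ENCRYPT_AES256 + DBMS_CRYPTO.CHAIN_CBC + DBMS_CRYPTO.PAD_PKCS5,
             key => l_key,
             iv  => l_iv);
END;
/</code>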
Categories: DBA Blogs

how to transfer client files to DB server using PL/SQL

Tom Kyte - Mon, 2021-07-26 18:26
Hi Tom, Is there a way to transfer a file from client machine to DB server using PL/SQL? If this is possible, can you show me an example? Thank you very much. Amy
Categories: DBA Blogs

Google Cloud Pricing Calculation

Online Apps DBA - Mon, 2021-07-26 06:08

➤ Google Cloud stands out among cloud providers because of its cutting-edge tools and services. In 2021, Gartner named Google Cloud the top in the IaaS Magic Quadrant. ➤ Google Cloud is a collection of Google’s cloud computing services. The platform offers a variety of services, including computing, storage, networking, Big Data, and others, all […]

The post Google Cloud Pricing Calculation appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[MLS-C01] AWS Certified Machine Learning – Specialty QnA Day 9 & 10 Live Session FAQs.

Online Apps DBA - Mon, 2021-07-26 05:18

The blog post – https://k21academy.com/awsmlday9-10 will cover the Q/A’s from Day 9 & 10 of AWS Certified Machine Learning – Specialty in which we have covered Module 9: SageMaker Built-in Algorithms, Module 10: Model Training & Tuning, and Module 11: Model Deployment FAQs. This blog will help you to get started with AWS Certified Machine […]

The post [MLS-C01] AWS Certified Machine Learning – Specialty QnA Day 9 & 10 Live Session FAQs. appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Hugging Face Course and Pretrained Model Fine-Tuning

Andrejus Baranovski - Mon, 2021-07-26 02:52
The Hugging Face team recently released an online course about transformers, pretrained model fine-tuning, and sharing models on the Hugging Face hub. I went through the first part of the course, which covers model fine-tuning. I explain the changes I made to my previous Hugging Face model fine-tuning sample, based on what I learned from this course.

 

Microsoft Data Analyst Associate [DA-100] Training | Day 2 Q/A Review

Online Apps DBA - Mon, 2021-07-26 00:29

This blog post will give a quick insight into our Microsoft Data Analyst Associate [DA-100] Class (Day 2) The DA-100 Certification enables Data Analysts to attain a good quality in their data assets with the assistance of Microsoft Power BI On our Day 2 Live Session, we have covered Module 3: Clean, Transform and Load […]

The post Microsoft Data Analyst Associate [DA-100] Training | Day 2 Q/A Review appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

How Digital Marketing Can Help Brand Building

OraQA - Sun, 2021-07-25 11:07

The internet continues to evolve and is the best tool for connecting with your consumers. Most people today search the internet for products and services they need and make purchases online. If you have a small business, then you recognize the importance of brand building. Digital marketing is a top component of brand building. The digital marketing agency Denver can help you create a digital marketing strategy to take your brand to the next level. If you’re still curious, here’s how digital marketing can help brand building.

Visibility and Reach

One of the top features of a solid digital marketing strategy devised by the leading digital marketing agency Denver is its extensive reach and visibility. Digital marketing can help create brand awareness. The more people see your brand, the more brand recognition it builds. When people recognize your brand, the chances of potential customers picking it increase. You can spend much less on brand awareness with a strong digital marketing plan.

Increases Credibility

Another way digital marketing helps brand building is by increasing the credibility of your brand. People tend to trust brands that they can easily recognize. When your brand has been around for a long time, it will have a customer base. You should also be able to deliver top-quality products and services to your audience to build trust. With the right digital marketing strategy by the top digital marketing agency Denver, your business will not only be recognizable but also reputable. As the credibility of your brand grows, so does loyalty to your brand.

Helps to Build a Community

Most of the time, everyone thinks that community building is for social media marketers only. However, every brand needs a community. A community requires engagement from different people. Digital marketing techniques help to build a loyal brand community. This community makes releasing new products and services much easier because you don’t have to invest so much in awareness. When you already have an engaged community waiting for your products and services, paid advertising becomes less necessary.

Improves Brand Equity

Brand equity is the commercial value you get from how your consumers feel about your brand. It determines how a brand commands profits and market shares in the industry. When you have higher brand equity, your business will notice an increase in its traffic. Digital marketing will help brand building and therefore lead to higher brand equity. This feature will make consumers favor your brand over competitors and increase your profits.

The post How Digital Marketing Can Help Brand Building appeared first on ORA QA.

Help, Oracle’s yanking my Chain (count)

The Anti-Kyte - Sun, 2021-07-25 09:32

Despite all the hype, it turns out that Football was coming home only to change its underwear before going out again.
As the Azzuri may have said, “Veni Vidi Vici (ai penalti)”…or something like that.
Look, I’m over it, there are far more important things in life than football and it will not impinge further on this post. Well, not much.

What I’m going to explore here is how you persuade Oracle to report the count of chained rows for a table in the _tables dictionary views. The examples that follow were run on Oracle 19c Enterprise Edition. I have observed the same behaviour all the way back to 11gR2.

What is a Chained Row ?

In Oracle, a chained row is one that is too big to fit into a single block.
Usually this happens because a row that fits within a single block is updated and becomes too big to fit into a block ( row migration), or the row is simply too big for a single block to start with.
Either way, reading such rows requires additional blocks to be retrieved, which may impact query performance.
There are other ways row chaining can occur but what we’re really interested in here is exactly how to identify which tables contain chained rows ( and how many).
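As a quick illustration of row migration (a sketch, separate from the chaining example that follows): pack a table's blocks full by reserving no free space for updates, then grow the rows so they no longer fit in their original blocks:

create table migrate_demo (
    id      number,
    padding varchar2(4000)
) pctfree 0;

insert into migrate_demo
    select rownum, rpad('x', 100, 'x')
    from dual
    connect by level <= 1000;

commit;

-- growing the rows leaves no room in the original blocks, so many of them migrate
update migrate_demo set padding = rpad('x', 4000, 'x');

commit;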

A Table with Chained Rows

Time for a simple example involving a table where some of the rows are created too large to fit into a single block.
First, let’s check the block size of the tablespace we’re creating the table in :

select ts.tablespace_name, ts.block_size
from dba_tablespaces ts
inner join user_users usr
    on usr.default_tablespace = ts.tablespace_name
/    
TABLESPACE_NAME                BLOCK_SIZE
------------------------------ ----------
USERS                                8192

Now let’s create a table which includes rows larger than 8192 bytes ( 8K) :

create table years_of_hurt as
    -- Winners of all the major tournaments England haven't won since 1966
    select 
        'ITALY' as team, 
        2 as euro_wins, 
        2 as world_cup_wins,
        rpad('Forza', 4000, '.') as text1,
        rpad('Italia',4000,' ') as text2,
        rpad('!', 4000, ' ') as text3
    from dual 
    union all
    select 'BRAZIL', null, 3, null, null, null from dual union all
    select 'GERMANY', 3, 2, rpad('Die mannschaft', 4000, '.'), rpad('Never bet against them', 4000, ' '), rpad('!', 4000, ' ') from dual union all
    select 'CZECH REPUBLIC', 1, null, null, null, null from dual union all
    select 'ARGENTINA', null, 3, null, null, null from dual union all
    select 'FRANCE', 2, 2, rpad( 'Allez', 4000, '.'), rpad('Les', 4000, ' '), rpad('Bleus!', 4000, ' ') from dual union all
    select 'DENMARK', 1, null, 'Which way to the beach ?', null, null from dual union all
    select 'GREECE',  1, null, null, null, null from dual union all
    select 'SPAIN', 2, 1, rpad('tiki', 4000, '.'), rpad('taka', 4000, ' '), rpad('!', 4000, ' ') from dual union all
    select 'PORTUGAL', 1, null, null, null, null from dual
/    

We can see that there are some records which are larger than the tablespace block size…

select team, 
    nvl(length(team||to_char(euro_wins)||to_char(world_cup_wins)
        ||text1||text2||text3), 0) as record_length
from years_of_hurt
order by 2 desc
/

TEAM           RECORD_LENGTH
-------------- -------------
GERMANY                12009
FRANCE                 12008
ITALY                  12007
SPAIN                  12007
DENMARK                   32
CZECH REPUBLIC            15
ARGENTINA                 10
PORTUGAL                   9
GREECE                     7
BRAZIL                     7

10 rows selected. 
DBMS_STATS

I mean, this is going to be a really short post, right ? If we want a stat like the chained row count we just need to run DBMS_STATS.GATHER_TABLE_STATS, don’t we ?

I mean, there’s even a column called CHAIN_CNT in the _tables views.
If we look at the column comments, we can confirm it’s purpose :

select comments
from all_col_comments
where owner = 'SYS'
and table_name = 'USER_TABLES'
and column_name = 'CHAIN_CNT'
/

COMMENTS                                                                        
-------------------------------------------
The number of chained rows in the table

So, when we gather stats on the table…

exec dbms_stats.gather_table_stats( user, 'years_of_hurt');

…we should see this reflected in the _tables dictionary views…

select num_rows, blocks, empty_blocks, chain_cnt, avg_row_len, 
    last_analyzed, sample_size
from user_tables
where table_name = 'YEARS_OF_HURT'
/
  NUM_ROWS     BLOCKS EMPTY_BLOCKS  CHAIN_CNT AVG_ROW_LEN LAST_ANALYZ SAMPLE_SIZE
---------- ---------- ------------ ---------- ----------- ----------- -----------
        10         10            0          0        4818 24-JUL-2021          10

Hmmm, it seems that DBMS_STATS is convinced that there are no chained rows at all in our table, despite sampling all 10 rows.

The ANALYZE command

Now, I’m old enough to remember a time when England were really bad at football and Italy…were still pretty good at football. This is back in the days before DBMS_STATS came along.
Then, in order to gather stats on a table, you had to run the ANALYZE command.

Looks like this command is still around and one of its purposes is to identify chained rows…

The CHAINED_ROWS table

We’ll come onto running ANALYZE to identify the individual chained rows in a table shortly. Before that, we need to ensure that there is a table to hold the results of this command.

Oracle provide a script for this purpose ( to be run as SYS as SYSDBA) :

$ORACLE_HOME/rdbms/admin/utlchain.sql

In order to access the table, I additionally ran :

grant all on chained_rows to mike;

You can point ANALYZE TABLE at a custom table of your own devising as long as it has the same structure as the CHAINED_ROWS table created by this script – e.g. :

create table local_chained_rows (
    owner_name varchar2(128),
    table_name varchar2(128),
    cluster_name varchar2(128),
    partition_name varchar2(128),
    subpartition_name varchar2(128),
    head_rowid rowid,
    analyze_timestamp date)
/

Either way, you can now analyze the table and write the chained row details to whichever target table you’re using. In this case, we’ll use the default :

analyze table years_of_hurt list chained rows into sys.chained_rows;
Table YEARS_OF_HURT analyzed.

I can now see that, as expected, there are four chained rows in the table :

select head_rowid, analyze_timestamp
from sys.chained_rows
where owner_name = 'MIKE'
and table_name = 'YEARS_OF_HURT'
/
HEAD_ROWID         ANALYZE_TIM
------------------ -----------
AAATiyAAMAAAAHrAAA 24-JUL-2021
AAATiyAAMAAAAHsAAC 24-JUL-2021
AAATiyAAMAAAAHuAAD 24-JUL-2021
AAATiyAAMAAAAHwAAD 24-JUL-2021

Note that using ANALYZE in this way does not change the stats recorded in the _TABLES dictionary views :

select chain_cnt
from user_tables
where table_name = 'YEARS_OF_HURT'
/
 CHAIN_CNT
----------
         0
Old-School Analysis

“You’ll win nothing with kids !”

Also…if the continued lack of an accurate chain count in USER_TABLES has left you feeling as sick as a parrot, you can always analyze your table just like Alan Hansen used to do…

analyze table years_of_hurt estimate statistics;

Finally, we can see that the stats in USER_TABLES have been updated :

select num_rows, blocks, empty_blocks, chain_cnt, avg_row_len, 
    sample_size
from user_tables
where table_name = 'YEARS_OF_HURT'
/
  NUM_ROWS     BLOCKS EMPTY_BLOCKS  CHAIN_CNT AVG_ROW_LEN SAMPLE_SIZE
---------- ---------- ------------ ---------- ----------- -----------
        10         10            6          4        4829          10

It’s worth noting that, even in our restricted column selection, the EMPTY_BLOCKS and AVG_ROW_LEN values have changed in addition to CHAIN_CNT.
If you do decide you need to use analyze in this way, it may be worth re-executing DBMS_STATS afterwards to ensure the stats used by the CBO are accurate. Note that doing so will not overwrite the CHAIN_CNT value :

exec dbms_stats.gather_table_stats(user, 'years_of_hurt');


select num_rows, blocks, empty_blocks, chain_cnt, avg_row_len, 
    sample_size
from user_tables
where table_name = 'YEARS_OF_HURT'
/
  NUM_ROWS     BLOCKS EMPTY_BLOCKS  CHAIN_CNT AVG_ROW_LEN SAMPLE_SIZE
---------- ---------- ------------ ---------- ----------- -----------
        10         10            6          4        4818          10

For the England Mens’ team, the next major tournament ( provided they qualify) is only next year, in 2022. That’s 56 years of schadenfreude if you’re using the Scottish Calendar.

Thinking Of Buying An Affordable Cooling System? 

OraQA - Sun, 2021-07-25 08:43

We all look out for comfort, which is why we spend money acquiring it in the different forms in which it presents itself. Whether it be buying new clothes, shoes, or gadgets that make living easy, we do not mind if it screams the word comfort. If you need to buy a new cooling system for home or office use that is not too expensive and will not put you in debt, you should consider buying an evaporative air cooler. This cooling system delivers air that cools the environment, leaving it dust-free and fresh smelling. You do not have to worry about the cost of the system because it is very affordable and works better than you can imagine.

Finding the right evaporative air cooler that fits into your workspace should not be too difficult. You can pick from any of the different models, types, and sizes depending on the size of your office or home space. When you compare the price to a standard air conditioner, you discover that you are spending far less to keep cool - something the air conditioner does effortlessly. Evaporative coolers are so effective that they cool the air by over 20 degrees while ensuring that stale air is replaced with cool, fresh air. You can get any size or type for your garage or outdoor use.

While working in your shop as a mechanic, this could be your closest buddy, ensuring that you do not get tired easily due to the heat of a vehicle’s engine or hot weather conditions. You need not worry about space for your cooling system, as it is usually small compared to an air conditioner and easier to move around. You can save more when you buy a portable one that is big enough to cover your workspace and has wheels that make it easier to move from one location to another. You could also buy two evaporative air coolers with the amount you would spend on a single air conditioner - useful if you need cooling both at home and in your office space but your budget will only cover one air conditioning system.

You do not need to worry because you can get really cool air at the best quality when you purchase a swamp cooler for use. Providing cool air through the evaporator is its specialty and you can be sure of enjoying your purchase. 

 

The post Thinking Of Buying An Affordable Cooling System?  appeared first on ORA QA.

Opening a Lagging Standby Database (to verify data ?)

Hemant K Chitale - Sat, 2021-07-24 06:04

 As shown in my previous blog post, you can create a Standby Database that lags the Primary by not applying Redo immediately but "waiting" for a specified interval.  It continues to receive ArchiveLogs but simply applies each one only after the "wait interval".


So, first, the status at the Primary:

oracle19c>echo $ORACLE_SID
ORCLCDB
oracle19c>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Jul 24 18:03:12 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> select systimestamp, database_role, current_scn from v$database;

SYSTIMESTAMP DATABASE_ROLE CURRENT_SCN
-------------------------------------------------------------- ---------------- -----------
24-JUL-21 06.03.32.106863 PM +08:00 PRIMARY 13258062

SQL> alter session set container=ORCLPDB1;

Session altered.

SQL> select * from hemant.job_tracking_tbl order by 1;

JOB_END_TIME JOB_ID DB_SCN
-------------------------------------------------------------- ---------- ----------
24-JUL-21 04.57.21.676949 PM 1 13239134
24-JUL-21 05.04.29.870877 PM 2 13241261
24-JUL-21 05.30.17.962275 PM 3 13246616
24-JUL-21 05.39.10.912969 PM 4 13247859
24-JUL-21 05.40.20.865467 PM 5 13248159
24-JUL-21 05.50.23.930352 PM 6 13252182
24-JUL-21 06.00.27.037797 PM 7 13257658

7 rows selected.

SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /opt/oracle/archivelog/ORCLCDB
Oldest online log sequence 310
Next log sequence to archive 312
Current log sequence 312
SQL>


Now, the status at STDB2 (the Standby that is lagging with an enforced 60-minute delay):

From the alert.log :
2021-07-24T17:51:15.716720+08:00
PR00 (PID:2924): Media Recovery Log /opt/oracle/archivelog/STDB21_303_1036108814.dbf
PR00 (PID:2924): Media Recovery Delayed for 59 minute(s) T-1.S-304
2021-07-24T17:57:26.299295+08:00
PR00 (PID:2924): Media Recovery Log /opt/oracle/archivelog/STDB21_304_1036108814.dbf
2021-07-24T17:57:44.580258+08:00
PR00 (PID:2924): Media Recovery Delayed for 60 minute(s) T-1.S-305
2021-07-24T18:00:32.550708+08:00
rfs (PID:3452): Archived Log entry 52 added for B-1036108814.T-1.S-311 ID 0xa7521ccd LAD:3
2021-07-24T18:00:33.444329+08:00
rfs (PID:3452): Selected LNO:4 for T-1.S-312 dbid 2778483057 branch 1036108814


oracle19c>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Jul 24 18:05:53 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> alter database recover managed standby database cancel;

Database altered.

SQL> alter database open read only;

Database altered.

SQL> select systimestamp, database_role, open_mode, current_scn from v$database;

SYSTIMESTAMP DATABASE_ROLE OPEN_MODE CURRENT_SCN
--------------------------------------------------- ---------------- ------------ -----------
24-JUL-21 06.06.51.313616 PM +08:00 PHYSICAL STANDBY READ ONLY 13239390

SQL>
SQL> alter pluggable database orclpdb1 open read only;

Pluggable database altered.

SQL> alter session set container=ORCLPDB1;

Session altered.

SQL> select * from hemant.job_tracking_tbl order by 1;

CURRENT_TIMESTAMP JOB_ID CURRENT_SCN
-------------------------------------------------------------- ---------- -----------
24-JUL-21 04.57.21.676949 PM 1 13239134

SQL>


The Primary database is at Log Sequence#312. This Standby has applied only Sequence#304. Let me resume Recovery for some more time and then check the Standby again.

SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
oracle19c>echo $ORACLE_SID
STDB2
oracle19c>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Jul 24 18:11:12 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1207955552 bytes
Fixed Size 9134176 bytes
Variable Size 318767104 bytes
Database Buffers 872415232 bytes
Redo Buffers 7639040 bytes
Database mounted.
SQL> alter database recover managed standby database using archived logfile disconnect from session;

Database altered.

SQL>
SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
oracle19c>


Later ... from the STDB2 alert log :

2021-07-24T18:40:22.531574+08:00
PR00 (PID:29990): Media Recovery Log /opt/oracle/archivelog/STDB21_309_1036108814.dbf
PR00 (PID:29990): Media Recovery Delayed for 59 minute(s) T-1.S-310
2021-07-24T18:40:43.574486+08:00
rfs (PID:531): No SRLs available for T-1
2021-07-24T18:40:43.743506+08:00
rfs (PID:531): Opened log for T-1.S-317 dbid 2778483057 branch 1036108814
2021-07-24T18:40:43.762715+08:00
ARC3 (PID:29836): Archived Log entry 57 added for T-1.S-316 ID 0xa7521ccd LAD:1
2021-07-24T18:40:43.762785+08:00
ARC3 (PID:29836): Archive log for T-1.S-316 available in 60 minute(s)
2021-07-24T18:49:27.636427+08:00
PR00 (PID:29990): Media Recovery Log /opt/oracle/archivelog/STDB21_310_1036108814.dbf
PR00 (PID:29990): Media Recovery Delayed for 60 minute(s) T-1.S-311
2021-07-24T18:50:45.257290+08:00
rfs (PID:531): Archived Log entry 58 added for B-1036108814.T-1.S-317 ID 0xa7521ccd LAD:3
2021-07-24T18:50:46.045279+08:00
rfs (PID:531): Selected LNO:4 for T-1.S-318 dbid 2778483057 branch 1036108814


oracle19c>echo $ORACLE_SID
STDB2
oracle19c>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Jul 24 18:51:27 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> alter database recover managed standby database cancel;

Database altered.

SQL> alter database open read only;

Database altered.

SQL> alter pluggable database orclpdb1 open read only;

Pluggable database altered.

SQL> alter session set container=orclpdb1;

Session altered.

SQL>
SQL> select * from hemant.job_tracking_tbl order by 1;

JOB_END_TIME JOB_ID DB_SCN
-------------------------------------------------------------- ---------- ----------
24-JUL-21 04.57.21.676949 PM 1 13239134
24-JUL-21 05.04.29.870877 PM 2 13241261
24-JUL-21 05.30.17.962275 PM 3 13246616
24-JUL-21 05.39.10.912969 PM 4 13247859
24-JUL-21 05.40.20.865467 PM 5 13248159
24-JUL-21 05.50.23.930352 PM 6 13252182

6 rows selected.

SQL>
SQL> select systimestamp from dual;

SYSTIMESTAMP
---------------------------------------------------------------------------
24-JUL-21 06.53.31.159094 PM +08:00

SQL>


So, now at 18:47, STDB2 has applied Sequence#310 and the database now shows data that came through that ArchiveLog - up to JOB_ID=6, JOB_END_TIME=05:50:23 pm.
The Primary has already progressed further.

SQL> l
1* select * from hemant.job_tracking_tbl order by 1
SQL> /

JOB_END_TIME JOB_ID DB_SCN
-------------------------------------------------------------- ---------- ----------
24-JUL-21 04.57.21.676949 PM 1 13239134
24-JUL-21 05.04.29.870877 PM 2 13241261
24-JUL-21 05.30.17.962275 PM 3 13246616
24-JUL-21 05.39.10.912969 PM 4 13247859
24-JUL-21 05.40.20.865467 PM 5 13248159
24-JUL-21 05.50.23.930352 PM 6 13252182
24-JUL-21 06.00.27.037797 PM 7 13257658
24-JUL-21 06.10.33.163203 PM 8 13259223
24-JUL-21 06.20.36.839944 PM 9 13261275
24-JUL-21 06.22.46.972310 PM 10 13261560
24-JUL-21 06.30.39.787880 PM 11 13262799
24-JUL-21 06.37.18.623659 PM 12 13263658
24-JUL-21 06.40.41.713016 PM 13 13264263
24-JUL-21 06.50.43.755835 PM 14 13265798

14 rows selected.

SQL>


So, the operative methods at the Standby are :
 For Recovery :
1.  alter database recover managed standby database USING ARCHIVED LOGFILE disconnect from session

To Open and Query :
1. alter database recover managed standby database CANCEL
2. alter database OPEN READ ONLY
3. alter pluggable database <pdbname>   OPEN READ ONLY

To resume Recovery :
1. shutdown immediate
2. startup mount
3. alter database recover managed standby database USING ARCHIVED LOGFILE disconnect from session

Meanwhile, the Primary must specify a DELAY value in the log_archive_dest_n parameter for this destination Standby.
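For example (illustrative only - the destination number, service name and other attributes depend on your configuration; DELAY is in minutes and is honoured only when the Standby is not using real-time apply, which is why Recovery above is started with USING ARCHIVED LOGFILE):

alter system set log_archive_dest_3 =
  'SERVICE=STDB2 ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DELAY=60 DB_UNIQUE_NAME=STDB2'
  scope=both;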




Categories: DBA Blogs

Security steps to protect a database

Tom Kyte - Fri, 2021-07-23 17:06
Which are the important security steps to protect a database whose application will be in internet? Which are the differences between Connection Manager and Database Firewall? If I have native network encryption configured do I need to do something on application server level? I mean, for example, configuring the thin jdbc client network?
Categories: DBA Blogs

how to use dbms_stats.gather_databse_stats

Tom Kyte - Fri, 2021-07-23 17:06
Tom: i try to use dbms_stats to give me a report of what statistics is missing, what i do is: declare a dbms_stats.objecttab; begin dbms_stats.gather_database_stats(OPTIONS=>'LIST EMPTY',OBJLIST=>a); end; after that how can i know the content of a?
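A minimal sketch of one way to inspect the collection afterwards (this assumes the ObjectElem record fields ownname, objtype and objname; check the DBMS_STATS documentation for the full record definition): <code>set serveroutput on

declare
  a dbms_stats.objecttab;
begin
  dbms_stats.gather_database_stats(options => 'LIST EMPTY', objlist => a);
  -- loop over the collection and print each object that has no statistics
  for i in 1 .. a.count loop
    dbms_output.put_line(a(i).ownname || '.' || a(i).objname || ' (' || a(i).objtype || ')');
  end loop;
end;
/</code>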
Categories: DBA Blogs

Getting Value from JSON array using PL/SQL

Tom Kyte - Fri, 2021-07-23 17:06
I need help. How do I get data from the stats object in the JSON array using PL/SQL? <code>{ "items": [ { " stats": { "m_date": "2019-05-31T00:00:00", "v_num": "0040012", "pk_num": "0562", "amt": 94, "bal": 75, "disc": 13 } } }</code> Thanks
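A sketch of one way to do this with JSON_TABLE (not the Ask Tom answer; it assumes the document is stored in a column called json_doc of a hypothetical table called my_json_docs, and that the key is "stats" without the stray leading space shown in the pasted JSON): <code>SELECT jt.m_date, jt.v_num, jt.pk_num, jt.amt, jt.bal, jt.disc
FROM   my_json_docs t,
       JSON_TABLE(
           t.json_doc, '$.items[*].stats'
           COLUMNS (
               m_date VARCHAR2(30) PATH '$.m_date',
               v_num  VARCHAR2(20) PATH '$.v_num',
               pk_num VARCHAR2(20) PATH '$.pk_num',
               amt    NUMBER       PATH '$.amt',
               bal    NUMBER       PATH '$.bal',
               disc   NUMBER       PATH '$.disc'
           )
       ) jt;</code>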
Categories: DBA Blogs

Invalid XML character Error - How to find the invalid character from a VARCHAR2 database column?

Tom Kyte - Fri, 2021-07-23 17:06
Hello, Oracle newbie here. I am getting this error "Character reference "&#56256" is an invalid XML character" for XML data that is printed onto a report. The XML data that is causing the issue is from a VARCHAR2 data column in the database. I have filtered out the column to a separate backup table I have created and I want to go through the records in order to find where this invalid character is in. Then I need to write an update statement replacing the invalid character with a valid one. I am not sure how to do this. Is there a regex I can write with a SELECT statement for this? I tried below, but they didn't bring up any results: <code>select * from tabname where instr(colname,chr(56256)) > 0; </code> <code>select * from tabname where colname like unistr('%\dbc0%'); </code> Glad if someone one can help. Thank you!
Categories: DBA Blogs

Usage Flashback Query with Views

Tom Kyte - Fri, 2021-07-23 17:06
Dear TOM! Is it safe to use Flashback Query with views? <b>Setup:</b> <code>CREATE VIEW my_view AS SELECT a.*, b.* FROM table_a a JOIN table_b b ON (a.ID = b.ID_A);</code> <b>Flashback Query at view level:</b> <code>SELECT v.* FROM my_view AS OF TIMESTAMP systimestamp - INTERVAL '30' SECOND v; </code> <b>Flashback Query at table level:</b> <code>SELECT a.*, b.* FROM table_a AS OF TIMESTAMP systimestamp - INTERVAL '30' SECOND a JOIN table_b AS OF TIMESTAMP systimestamp - INTERVAL '30' SECOND b ON (a.ID = b.ID_A);</code> <b>Will both queries <u>reliably</u> deliver the same result?</b> <b>Will the result of both queries be <u>consistent</u> over all affected tables?</b> Thank you, Matthias
Categories: DBA Blogs

Help with trying to decide with authentication approach should be setup

Tom Kyte - Fri, 2021-07-23 17:06
Please help me keep my sanity by pointing me in the right direction when deciding the authentication approach to use with Oracle 19c databases. This is a very confusing topic since it deals with a few areas that require experience with Microsoft Active Directory or other nonRDBMS software. Also, each one of them isn't simple to use and Oracle training didn't go into implementing each of these when I attended in early 2000. This is driving me insane especially since I thought that database authentication was already secure enough. Here are all of the Oracle authentication methods that I know exist: <code>- Oracle database authentication ( create user identified by password ) - Operating System authentication ( create OPS$user identified externally ) - Kerberos / Radius (create user identified externally as 'kerberos_name' ) - certificate_DN ( create user identified externally as 'certificat_DN' ) (is this SSL authentication?) - Globally as 'directory_DN' ( create user identified GLOBALLY as 'directory_DN') ( Sigh ... this sounds so much like other authentication options.)</code> I'm not sure if these are authentication approaches, but I know they mingle with authentication and add to the confusion: - Centrally Managed Users - Enterprise User Security Also, knowning when Microsoft Active Directory can be used is confusing. I think these require Microsoft Active Directory: - Kerberos - Centrally Managed Users To muddy the water more, based on what I have seen, Kerberos can be used with Centrally Managed Users which is confusing since it seems like Kerberos with AD is enough. Finally, I keep seeing that Oracle Internet Directory is needed in some cases. The only one that seems to need is "Enterprise User Security" which seems like if we have Microsoft Active Directory, we would use "Centrally Managed Users" setup. I know i've mentioned a lot above. It would be nice if you can at a minimum tell me which one I should focus on to setup a secure authentication approach without going overboard. Which approach would recommend to use for the most secure authentication with the following in our infrastructure: <code>- Enterprise Edition Oracle 19c on Linux with April 2021 RU applied - SQLNET.TCP.INVITED_NODES - FAILED_LOGIN_ATTEMPTS=3 - orc12c_verify_function - We don't allow use of password file - Limit access through Oracle "grants" - We have changed all default passwords - We use profiles to expire passwords regularly - Microsoft active directory which we aren't using. - We use CA signed SSL certificates with strong encryption algorithms with FIPS-140-2 configured between database server and clients so we could use "Authentication with Public Key Infrastructure". - Our databases are only accessed through the applications not by individual users</code> Why isn't the above good enough? The only thing we aren't using is Microsoft Active directory or SSL Client Authentication. I thought that having Oracle database authentication with a complex password with the use of CA signed certificates would be a secure authentication approach. Why would Oracle feel the need to add more authentication approaches and confuse most of us? With this approach, a client needs to know the password. A client needs to have been given the CA signed certificate in order to be allowed to connect to the database. 
A client is forced to use a complex password, is only given limited password attempts with FAILED_LOGIN_ATTEMPTS=3, Finally, we have TCP.INVITED_NODES setup so only those clients with IPs in that list are allowed to connect. Geezzz, why is more needed? Thanks for your help, John
Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator