DBA Blogs
I have a materialized view in my own schema which is based on a table in another schema. However, I want to modify the materialized view to add a WHERE clause which will never be true, e.g. WHERE 1=2.
Can I alter an existing materialized view to add this WHERE clause to the MView query? Will an ALTER command work, or do I have to drop and recreate the MView (which I am reluctant to do)?
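For context, my understanding is that ALTER MATERIALIZED VIEW cannot change the defining query itself (it only changes things like refresh options and storage), so the change described would normally mean drop and recreate. A minimal sketch, with all object names hypothetical:

```sql
-- Hypothetical names; the refresh options are placeholders.
DROP MATERIALIZED VIEW my_mv;

CREATE MATERIALIZED VIEW my_mv
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS
SELECT t.*
FROM   other_schema.base_table t
WHERE  1 = 2;   -- the never-true predicate described above
```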
Hi,
I have a table that holds about 2 million records. No column in the table can be used to divide the rows equally. When the data is initially loaded into the table, the bucket_number column is null. After the data is loaded, I would like to divide the total number of records into equal buckets and update the bucket_number column accordingly. The buckets will later be processed in parallel by another batch process. I could use NTILE to bucketize the data, but I want to persist the bucket number in the table.
What is the best way to bucketize the data and update the column in Oracle?
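One possible sketch of what is being asked, assuming a hypothetical table my_table, a chosen bucket count of 8, and NTILE over an arbitrary ordering:

```sql
-- Hypothetical sketch: persist NTILE bucket numbers in the table.
-- my_table, bucket_number, and the bucket count (8) are assumptions.
MERGE INTO my_table t
USING (
  SELECT rowid AS rid,
         NTILE(8) OVER (ORDER BY rowid) AS bucket   -- near-equal buckets
  FROM   my_table
) src
ON (t.rowid = src.rid)
WHEN MATCHED THEN
  UPDATE SET t.bucket_number = src.bucket;

COMMIT;
```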
Dear Tom,
how can I create a bind variable in Oracle Live SQL?
<code><variable variable_name data_type;>
example <variable dusko number;> </code>
just doesn't work; I get an "Unsupported command" message...
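For reference, in SQL*Plus or SQLcl the command would be written without the angle brackets, along these lines (whether Live SQL accepts it is exactly the question):

```sql
VARIABLE dusko NUMBER

BEGIN
  :dusko := 42;   -- assign the bind variable inside a PL/SQL block
END;
/

PRINT dusko
```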
Thanks and best regards
ORA-04063: package body "SYS.DBMS_NETWORK_ACL_ADMIN" has errors.
How do I reinstall this package?
I need to create some test data: N timestamps in every 15-minute interval between two times, including the start time and less than the end time.
N would be defined by dbms_random between a low_number and a high_number.
I want to run something like this: get_dates(TIMESTAMP'2023-01-25 03:00:00', TIMESTAMP'2023-01-27 13:30:00', 10, 25)
Any help would be greatly appreciated.
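A rough sketch of one way this could be approached in plain SQL (12c+ for LATERAL); the boundary timestamps and the 10/25 range come from the example call, everything else is an assumption:

```sql
-- Generate each 15-minute slot, pick a random N in [10,25] per slot,
-- then emit N random timestamps inside that slot.
WITH intervals (slot_start) AS (
  SELECT TIMESTAMP '2023-01-25 03:00:00'
         + NUMTODSINTERVAL((LEVEL - 1) * 15, 'MINUTE')
  FROM   dual
  CONNECT BY TIMESTAMP '2023-01-25 03:00:00'
             + NUMTODSINTERVAL((LEVEL - 1) * 15, 'MINUTE')
             < TIMESTAMP '2023-01-27 13:30:00'
),
counts AS (
  SELECT slot_start,
         TRUNC(DBMS_RANDOM.VALUE(10, 26)) AS n   -- N between 10 and 25
  FROM   intervals
)
SELECT c.slot_start
       + NUMTODSINTERVAL(DBMS_RANDOM.VALUE(0, 900), 'SECOND') AS ts  -- inside the slot
FROM   counts c,
       LATERAL (SELECT LEVEL AS i FROM dual CONNECT BY LEVEL <= c.n);
```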
Hello,
Thanks to this forum, a lot of my questions have been answered.
I noticed that most of the queries in an application I am working on use the MATERIALIZE hint in their subqueries.
Our queries heavily use CTEs and rely on multiple joins.
Our query performance is not great.
In OEM, I noticed all these queries have high user I/O, and I am guessing it's because all the queries against tables with hundreds of millions of rows are materialized.
Is that true? Is it better to keep the hint there rather than remove it?
When should I REALLY be using the MATERIALIZE hint?
The queries also use
<code>where column1 = CASE WHEN this THEN 1 WHEN THAT THEN 2 WHEN OTHER_THAT THEN 3</code>
and
<code>Select NVL(this, NVL(that,99)), column, column from table;</code>
Does the above impact performance? If so, how can I improve it?
I also noticed the use of the "use_concat materialize" hint combination. For some of the queries, when I execute them without "use_concat" they run almost 300% faster.
Under what conditions should this hint be used?
What would help with performance?
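For reference, the two CTE hints in play look like this on a hypothetical query (MATERIALIZE writes the CTE result to a temporary segment, while INLINE asks the optimizer to merge it into the main query instead):

```sql
-- Force the CTE result to be written to a temporary table:
WITH totals AS (
  SELECT /*+ MATERIALIZE */ order_id, SUM(amount) AS amt
  FROM   order_lines
  GROUP  BY order_id
)
SELECT * FROM totals WHERE amt > 100;

-- The opposite: let the optimizer merge the CTE into the main query:
WITH totals AS (
  SELECT /*+ INLINE */ order_id, SUM(amount) AS amt
  FROM   order_lines
  GROUP  BY order_id
)
SELECT * FROM totals WHERE amt > 100;
```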
We currently use triggers on a subset of tables in our transactional database (on-premises) to capture row / column level changes and insert them into a separate table. With the availability of DBMS_CLOUD package for our product version we are exploring the option to rewrite some of the existing triggers and push the row / column changes to a native queueing service on the cloud instead of writing to another table in the database. The idea is to then have an asynchronous process to subscribe to the queue and push the data to a downstream data store.
The developer blog below has some very relevant information but it is written for Autonomous database & Oracle cloud.
https://blogs.oracle.com/developers/post/back-to-the-database-part-3-publishing-database-changes-to-a-stream
Has anyone had success with using DBMS_CLOUD package on 19c to work with streaming / queuing services on Azure / AWS cloud? On the documentation (https://docs.oracle.com/en/database/oracle/oracle-database/19/arpls/DBMS_CLOUD.html), I see that it can interact with an object store but unlike the documentation for autonomous database there is no mention of REST_API capabilities. Is that an undocumented feature for 19c? I am looking for some clarity before embarking on a proof of concept. Thank you.
I have a Docker container with Oracle Database Enterprise Edition (I followed the basic instructions on container-registry.oracle.com).
I'm trying to create a Node.js application to connect to the Oracle Database inside the container (I tried the "how to" basics on node-oracledb.readthedocs.io), but without success.
My connection object is the following (using the same credentials and connectString I'm able to connect in SQL Developer):
<code> {
user: 'SYS',
password: 'Paulo@oracle123',
connectString: 'localhost/OracleSID '
}
</code>
I've been receiving the following error:
<code>Error: DPI-1047: Cannot locate a 64-bit Oracle Client library: "The specified module could not be found". See https://oracle.github.io/node-oracledb/INSTALL.html for help
Node-oracledb installation instructions: https://oracle.github.io/node-oracledb/INSTALL.html
You must have 64-bit Oracle Client libraries in your PATH environment variable.
If you do not have Oracle Database on this computer, then install the Instant Client Basic or Basic Light package from
https://www.oracle.com/database/technologies/instant-client/winx64-64-downloads.html
A Microsoft Visual Studio Redistributable suitable for your Oracle client library version must be available.
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async connectToDb (D:\PersonalDev\Migrate Oracle to MSSQL\index.js:10:18) {
errorNum: 0,
offset: 0
}
</code>
But I couldn't figure it out.
Please help!
We have an old query that was running fine (though it could use improvements in general), but after a VPD policy was put in place it started to run long and not complete. The difference is between 1-1.5 hours and not completing after running for well over 48 hours.
After digging into the VPD policy, it turns out that for this application ID they did not want to apply the policy, so the predicate they were adding was just "1 = 1". From what I have experienced (and after doing some research), I haven't really found anything suggesting that 1 = 1 could affect a query.
In this case, when we run the explain plan for the query in an environment where the new policies are not present, we get a different explain plan when the "1 = 1" is added into the query.
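For context, a policy function behaving as described might look roughly like this (purely hypothetical names; the point is that even the no-op predicate '1 = 1' still rewrites the query):

```sql
-- Hypothetical VPD policy function: the exempt application gets '1 = 1'.
CREATE OR REPLACE FUNCTION vpd_predicate (
  p_schema IN VARCHAR2,
  p_object IN VARCHAR2
) RETURN VARCHAR2
IS
BEGIN
  IF SYS_CONTEXT('APP_CTX', 'APP_ID') = 'EXEMPT_APP' THEN
    RETURN '1 = 1';   -- no filtering, but the query is still rewritten
  END IF;
  RETURN 'org_id = SYS_CONTEXT(''APP_CTX'', ''ORG_ID'')';
END;
/
```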
In a nutshell, this query selects against a large detail table, but it must first check whether the detail has been completed (a subquery against the vw_header_detail view) and then check whether the order was modified in the window of time we are looking at (a second subquery that does an EXISTS check against 4 different tables). Note: these are all views that reference tables.
If it is completed and was modified in the time window, it goes back to the detail table, which is very large, and pulls all the applicable rows for that seq_id.
<code>SELECT *
FROM vw_detail dtl
WHERE EXISTS (SELECT 1
FROM vw_header_detail sts1
WHERE ((sts1.posted_date IS NOT NULL) OR (sts1.secondary_posted_date IS NOT NULL))
AND sts1.seq_id = dtl.seq_id
AND sts1.c_status in ('P','D','C','G','K','A')
AND sts1.p_status in ('S','P','F','O') )
AND (EXISTS (SELECT 1
FROM vw_header hdr
WHERE nvl(update_datetime, :out_of_range) BETWEEN :from_date and :to_date
AND hdr.seq_id = dtl.seq_id)
OR EXISTS (SELECT 1
FROM vw_detail dtl2
WHERE nvl(update_datetime, :out_of_range) BETWEEN :from_date and :to_date
AND dtl2.seq_id = dtl.seq_id)
OR EXISTS (SELECT 1
FROM vw_secondary_header sech
WHERE nvl(update_datetime, :out_of_range) BETWEEN :from_date and :to_date
AND sech.seq_id = dtl.seq_id)
OR EXISTS (SELECT 1
FROM vw_secondary_detail secd
WHERE nvl(update_datetime, :out_of_range) BETWEEN :from_date and :to_date
AND secd.seq_id = dtl.seq_id))
ORDER BY dtl.seq_id, dtl.line_number</code>
The existing query above will return this explain plan:
<code>------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 21 | 9282 | 6282K (1)| 00:04:06 |
| 1 | SORT ORDER BY | | 21 | 9282 | 6282K (1)| 00:04:06 |
| 2 | NESTED LOOPS SEMI | | 21 | 9282 | 6282K (1)| 00:04:06 |
| 3 | NESTED LOOPS | | 21 | 9240 | 6281K (1)| 00:04:06 |
| 4 | VIEW | VW_SQ_1 | 945K| 6463K| 6278K (1)| 00:04:06 |
| 5 | HASH UNIQUE ...
I worked as a pure Oracle DBA from 2001 to 2015 and then transitioned to being a Cloud Engineer. One question I am always asked is what it takes to make that transition. How long does it take? How hard is it? In this video I share my journey of transitioning from Oracle DBA to Cloud Engineer, the lessons learned, and some tips to make the transition smooth.
Cloud engineers spend a fair bit of their time in Linux terminals along with GUI tools. After working with various terminals, let me share what an ideal Linux terminal for a cloud engineer looks like to me. By ideal, I mean more productive, easier to use, and better for presenting.
The Kubernetes Python client is used to perform operations on Kubernetes resources in the cluster from your Python code. Prerequisites:
- A running Kubernetes cluster (Note: I am using an AWS EKS cluster)
- The Kubernetes client for Python, installed with pip: pip install kubernetes
- The kubectl utility to verify the resources (optional)
All the code files used in this video are in my GitHub repo; the link is in the video description.
For code and test output, please see LiveSQL Link
https://livesql.oracle.com/apex/livesql/s/onh4jsczpzaa2bp9t0r9yfkxh
When called with:
exec test_proc(1000, 1, 5)
the output shows 500 vs. 77. Why is the InDirect Count 77 calls?
Statement processed.
---------- Compare test_proc(1000, 1, 5) Function-Calls -------------
Direct Count = 500
InDirect Count = 77
In the following calls, we can see that from test_proc(183, 0, 5) to test_proc(184, 0, 5), the InDirect Count changed from 50 to 51.
Likewise, from test_proc(186, 1, 5) to test_proc(187, 1, 5), the InDirect Count changed from 50 to 51.
Why such a difference in call counts?
In the <b>Hierarchical Queries: Databases for Developers</b> first recursion example we run the query for the initial case in the recursive loop:
<code> select employee_id, first_name, last_name, manager_id
from employees
where manager_id is null
</code>
Then comes a union all:
<code>union all</code>
followed by the part that will repeat until it hits its termination condition:
<code> select e.employee_id, e.first_name, e.last_name, e.manager_id
from org_chart oc
join employees e
on e.manager_id = oc.employee_id
)
</code>
My question is: <b>Why is the initial anchor value not repeated?</b> Is it not searching for cases where manager_id is null on every iteration?
I would have expected the root to be repeated between every other entry of the data.
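Assembling the excerpts above, the full query looks like this; the anchor member is evaluated exactly once, and each subsequent iteration joins employees only against the rows produced by the previous iteration, which is why the root is not repeated:

```sql
WITH org_chart (employee_id, first_name, last_name, manager_id) AS (
  -- Anchor: runs once, returning only the root row(s)
  SELECT employee_id, first_name, last_name, manager_id
  FROM   employees
  WHERE  manager_id IS NULL
  UNION ALL
  -- Recursive member: joins only to the previous iteration's output
  SELECT e.employee_id, e.first_name, e.last_name, e.manager_id
  FROM   org_chart oc
  JOIN   employees e
  ON     e.manager_id = oc.employee_id
)
SELECT * FROM org_chart;
```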
Hello, I'm working on a legacy system where the programmers abused triggers. I have a hard time identifying the impact each single commit has on the system. Is there any way to get the rowids of all changed records in all database tables in the transaction, before committing?
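One low-tech sketch (ironically, using yet another trigger; all names are hypothetical) is a per-table row trigger that records each touched rowid in a log table, which the session can then query before issuing the commit:

```sql
-- Hypothetical change log plus one per-table trigger.
CREATE TABLE txn_change_log (
  table_name  VARCHAR2(128),
  row_id      ROWID,
  change_time TIMESTAMP DEFAULT SYSTIMESTAMP
);

CREATE OR REPLACE TRIGGER orders_change_log
AFTER INSERT OR UPDATE OR DELETE ON orders
FOR EACH ROW
BEGIN
  IF DELETING THEN
    INSERT INTO txn_change_log (table_name, row_id)
    VALUES ('ORDERS', :old.rowid);
  ELSE
    INSERT INTO txn_change_log (table_name, row_id)
    VALUES ('ORDERS', :new.rowid);
  END IF;
END;
/
```

Querying txn_change_log inside the same transaction then lists every rowid the pending commit will touch, at the cost of one more trigger per table.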