Mathias Magnusson

Changing the world, one line of code at a time

Looking for AI? You already have it!

Tue, 2024-04-23 08:00
These days I meet many people who talk about how they have a project to find an AI platform to use on-premises. In the cloud on Autonomous it is pretty obvious that you use one of the handful of AI services Oracle has at your disposal there. The question is what you should use to build AI services on the data in your on-prem Oracle database. You don’t need to get anything. This blog post assumes you have an Oracle database […]

Ignoring errors on “alter session set”

Tue, 2024-04-16 08:00
This tip is especially useful when migrating to Autonomous. The reason is that to enable the magic of the autonomous database there are things that cannot be done. One such thing is that there are a lot of session settings you cannot use. To see what APIs you can no longer use you can look at the documentation. If you look at alter session there are a number of “alter session set …” that are available. But many you […]

EZ Connect Plus

Tue, 2023-11-28 07:00
Sometimes new things that are really useful show up in new versions and you can miss them if you’re not paying close attention. EZ Connect Plus is one such feature for me. You may think it is not new and that it has been there for a long time. You’d be correct that EZ Connect is not new, but the Plus moniker is. It was added in version 19. I never noticed and then I was looking for how to connect […]

Reading data in the IG

Tue, 2023-11-21 07:00
This turned into a miniseries. First it was about focusing on a cell and then on entering the cell in edit mode, but I figure there is a related thing one might want to do. You could want to read the data in a cell once the Interactive Grid (IG) has been loaded. Maybe you need it for some other processing or you want to present something in the IG also in another region. Say that you show the user […]

Access Base DB from your PC

Thu, 2023-11-16 07:00
A recent post talked about setting up a Base DB Service in OCI and finished with logging into the OS of the service and there connecting to the DB. What we often want to do is to be able to connect to it directly from our own computer to use the tools and workflow we have in place. This post is about how to set that up. First you head back into OCI and look at the properties of your […]

Entering edit in a cell on pageload

Tue, 2023-11-14 07:00
I recently wrote about how you set focus on a certain cell in an interactive grid. As is often the case, a solution may work well in many cases but there are some cases where something additional is needed. Say that getting to one cell in an interactive grid is what you want, but some cases are so obvious that the user would want to edit a certain cell once the grid is loaded with data. In this case we […]

Creating an Oracle Base DB service

Thu, 2023-11-09 07:00
You may already have one or two free autonomous databases at OCI. Why then would you want a lower tier database offering, a database that is not autonomous? It turns out there are many reasons why you may want this. No matter what your reason is, setting it up is pretty easy. One thing before you start though, you cannot stop it to stop spending money. You have to terminate the service and later recreate it if you want […]

Focusing on a specific cell in an interactive grid

Tue, 2023-11-07 07:00
When an interactive grid is populated with data, most of the time the default setup is great. But for certain applications you know that the user will want to have a certain cell on a certain row in the grid be in focus. Sure you can leave it and just let the user click on it every time. It’s not like a single click is very hard. But helping them be the most productive they can is what most of […]

Bug in oracledb python with wallet – DPI-1032

Mon, 2023-10-09 00:29
If you are working with Python and using the oracledb driver you may suddenly encounter DPI-1032 – “user name and password cannot be set when using external authentication”. The issue is with using a wallet for authentication. I encountered it on a server where the version of the oracledb driver had been slightly upgraded. It was developed and working fine on oracledb 1.2.2. On 1.3.0 I could not get it to work. Upgrading to 1.3.2 got it back in working […]

Let’s talk about ORDS IX

Mon, 2023-05-29 08:00
This post is part of a series that starts with this post. Having looked at a bunch of things we can do with something as simple as an Auto-REST enabled view, up to and including adding security, I just want to finish off with showing that having a view in the first place is not needed if all you want is a GET service for a SQL statement. AutoREST gives you many more things automatically, like all forms of DML. […]

Let’s talk about ORDS VII

Wed, 2023-05-17 08:00

This post is part of a series that starts with this post.

Having gone through much of what can be done with a basic REST-service earlier in this series, it is time to look at securing the service. When you can access a service with nothing more than just the URL, then so can anyone else that can send the URL to the ORDS-server. Not only can they read, but if the view allows writing then they can write too, as an autoREST view also has DML capability in the generated API.

In the example used for this series that will not work, as our view is based on a view Oracle has defined and that cannot be updated by you as a user. However, you will typically want to protect read as well as write for your services. That is what we’ll achieve by securing the service. In this post we’ll secure it from access and in the next we’ll look at accessing the secured service.

To secure a service we create a role and a privilege, and then define what the privilege will protect; in our case we’re protecting a path in the URL.

Let’s start with creating a role. It is done by just giving it a name, nothing else is needed.

Remember, we protect the service in the source database/schema. That is rest_demo with the alias rest_demo_schema in the setup for this series.

exec ords.create_role(p_role_name => 'gnome_role');

Why gnome you ask? Well, there is a certain theme to the images in this series.

Now, the next step is to set up a privilege that is used to protect the service.

begin
  ords.create_privilege(
      p_name        => 'gnome_priv',
      p_role_name   => 'gnome_role',
      p_label       => 'Gnome Data',
      p_description => 'Give access to demo data.');
end;
/

With that all there is left is to define what it should protect. With an AutoREST-enabled view our only option is to define a path for the URL to protect.

begin
  ords.create_privilege_mapping(
      p_privilege_name => 'gnome_priv',
      p_pattern        => '/vw_rest_svc/*');
end;
/

With that, the service on that path is now protected. Note that the pattern is within the schema alias. It starts from that point in the URL, so it does not work to have /ords/… in it. That is good as it allows the alias for the schema to be changed without the security being sidestepped.

All that is left now is to verify that the service is indeed not available anymore.

curl https://.../ords/rest_demo_schema/vw_rest_svc/ | python3 -m json.tool 
{
    "code": "Unauthorized",
    "message": "Unauthorized",
    "type": "tag:oracle.com,2020:error/Unauthorized",
    "instance": "tag:oracle.com,2020:ecid/039ed419abad226de418d37c6f146756"
}

Great, the service is now protected. In fact, it is so well protected that there is no way to access it. Setting up to allow access is where we pick up in the next post.

Let’s talk about ORDS VI

Thu, 2023-05-11 08:00

This post is part of a series that starts with this post.

It is time to turn to query parameters. They expand the endpoint to allow slicing and dicing what we get back. Or in other words, they allow filtering. Or in layman’s terms, you can supply a where-clause expressed in JSON. You can also define sorting and much more.

Query parameters are basically key-value pairs with an equal sign as the separator. This sounds much more academic and complex than it is in reality.

Our tests will all have the end node as their base. That is, for our test case, https://.../ords/rest_demo_schema/vw_rest_svc/. All query parameters are added after that.

Limit the number of rows

The default number of rows returned is 25, as we have seen in most of the previous posts. But let’s change that for a single call and get just the first row. Add limit=1 to the URL. We need to start with a question mark to indicate that the HTTP string now continues with parameters. I prefer to use cURL as it is as bare bones as one can get with HTTP calls. Remember the Python module used for formatting so it gets easier to read? All of that means that the call would look like this.

curl https://.../ords/rest_demo_schema/vw_rest_svc/\?limit=1 | python3 -m json.tool

Note that due to Linux shell mechanics the question mark has special meaning, so I have to escape it by putting a backslash before it. That is only needed on the prompt in Linux, so remove it in the web browser or database.

Skip some rows

Next up, let’s skip three rows in the result set and get one. Run the REST call without any parameters to see all rows so you can verify that the result is correct. Add \?offset=3\&limit=1 to the end node, as shown below. Note that the ampersand also has special meaning, so on the prompt you need to escape it as well. Look at the next and prev URLs; they are now using the limit you sent in, so using them you get the same amount of data for every call you make using those.
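Put together, with the same placeholder URL and the escaping mentioned above, the full call would look something like this.

curl https://.../ords/rest_demo_schema/vw_rest_svc/\?offset=3\&limit=1 | python3 -m json.tool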

Sorting

How about sorting the data by the name of the objects? Just add $orderby to the end node. This one is a bit more complex as it has another key-value pair as its value. {$orderby:{object_name:ASC}} says that we want to sort by object_name in ascending order. We’ve now got enough complexity in our command that we need to URL-encode the parameter for curl not to interpret it and cause errors.

curl --get https://.../ords/rest_demo_schema/vw_rest_svc/ \
     --data-urlencode 'q={"$orderby":{"object_name":"ASC"}}' \
     | python3 -m json.tool

We specify that GET is the method to use and we request URL-encoding of the parameter string. curl will replace special characters with %-encoded values. It then concatenates that to the URL and puts a question mark before it.

The result is 25 rows (as per the default) sorted in ascending order, thus the results are objects starting with the letter A.

Sorting + Limit + Offset

Let’s take the above and combine them to limit the number of rows from a sorted result while skipping the first n rows.

We’re using both --data-urlencode and --data, as the parameter values in --data do not have to be encoded. They cannot be appended to the string we give to --data-urlencode, as that would encode the & characters as well.

curl --get https://.../ords/rest_demo_schema/vw_rest_svc/ \
     --data-urlencode 'q={"$orderby":{"object_name":"ASC"}}' \
     --data '&limit=1&offset=3' | python3 -m json.tool

The result is the fourth row sorted by object_name.

Sorting + Condition + Limit

Next up is a similar variant, only now we want to define the start for the search with a where-clause like condition rather than skipping over a number of rows.

curl --get https://.../ords/rest_demo_schema/vw_rest_svc/ \
     --data-urlencode 'q={"$orderby":{"object_name":"ASC"},"object_name":{"$between":["AD","Z"]}}' \
     --data '&limit=1' | python3 -m json.tool

I use $between here to achieve a greater-than check; I have not found another filter criteria that achieves that for strings. For numbers there is "column":{"$gt":12345}, which looks more natural when there is no upper bound of interest.

The result is that the same row is returned as with the previous example.
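As a sketch of the numeric variant mentioned above, filtering on object_id instead (the cutoff 1000 is just an illustrative value, not from the original example), the call could look something like this.

curl --get https://.../ords/rest_demo_schema/vw_rest_svc/ \
     --data-urlencode 'q={"$orderby":{"object_name":"ASC"},"object_id":{"$gt":1000}}' \
     --data '&limit=1' | python3 -m json.tool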

Doing it with PL/SQL

We can of course use the same query parameters with PL/SQL.

set serveroutput on
set define off
declare
  payload clob;

  svc varchar2(200) := 'https://.../ords/rest_demo_schema/vw_rest_svc/';
begin
    dbms_output.put_line('Base:');
    payload := apex_web_service.make_rest_request
       (
         p_url         => svc
       , p_http_method => 'GET'
       );
    dbms_output.put_line(payload);
    dbms_output.put_line(null);

    dbms_output.put_line('Limit:');
    payload := apex_web_service.make_rest_request
       (
         p_url         => svc || '?limit=1'
       , p_http_method => 'GET'
       );
    dbms_output.put_line(payload);
    dbms_output.put_line(null);

    dbms_output.put_line('Limit and Offset:');
    payload := apex_web_service.make_rest_request
       (
         p_url         => svc || '?offset=3&limit=1'
       , p_http_method => 'GET'
       );
    dbms_output.put_line(payload);
    dbms_output.put_line(null);

    dbms_output.put_line('Sorting:');
    payload := apex_web_service.make_rest_request
       (
         p_url         => svc || '?q={"$orderby":{"object_name":"ASC"}}'
       , p_http_method => 'GET'
       );
    dbms_output.put_line(payload);
    dbms_output.put_line(null);

    dbms_output.put_line('Sorting + Limit + Offset:');
    payload := apex_web_service.make_rest_request
       (
       p_url         => svc || '?q={"$orderby":{"object_name":"ASC"}}'
                            || '&offset=3&limit=1'
       , p_http_method => 'GET'
       );
    dbms_output.put_line(payload);
    dbms_output.put_line(null);

    dbms_output.put_line('Sorting + Limit + Condition on PK:');
    payload := apex_web_service.make_rest_request
       (
       p_url         => svc || '?q={"$orderby":{"object_name":"ASC"},'
                            || '"object_name":{"$between":["AD","Z"]}}&limit=1'
       , p_http_method => 'GET'
       );
    dbms_output.put_line(payload);
    dbms_output.put_line(null);
end;
/

That block of code executes the same exact calls as previously done with cURL, and it of course then has the same exact result.

The paging and offset we’ve used pose some challenges when there is a lot of data. Each call will read the same rows and just skip over more and more. For such cases you will want to page by using a filter condition instead.
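A rough sketch of that approach, assuming we sort on object_id and that 550 happens to be the last object_id seen on the previous page (both values are illustrative only), could look like this.

curl --get https://.../ords/rest_demo_schema/vw_rest_svc/ \
     --data-urlencode 'q={"$orderby":{"object_id":"ASC"},"object_id":{"$gt":550}}' \
     --data '&limit=25' | python3 -m json.tool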

Having looked at how the data can be retrieved using query parameters, we’ll take a look at securing the service in the next post.

Let’s talk about ORDS V

Mon, 2023-05-08 08:00

This post is part of a series that starts with this post.

In the last post we covered how to get the JSON data returned from a REST call converted back to be represented as the rows and columns we are used to working with in the database. We’ll now take a look at how paging works and how we work with it to get a view to read all the rows.

Let us first revisit how one reads one chunk of data after another. The default is to get 25 rows in each call. If we look at it with cURL again we can get the first 25 by just referencing the end-point for our service.

curl https://.../ords/rest_demo_schema/vw_rest_svc/ | python3 -m json.tool 

As usual the python part just makes it more readable. You can run just the cURL-command without it and look for the next-url. The next-url will look something like this.

{
    "rel": "next",
    "href": "https://.../ords/rest_demo_schema/vw_rest_svc/?offset=25"
}

If you take the href-value and use it in a cURL call you get the next 25 rows and a new next-url that ends with offset=50. Offset tells the service how many rows from the beginning to skip over before starting to return data. Thus, to get all the rows you have to loop and call the next-url until there is no more data. If there is more data, the hasMore attribute in the JSON response will have the value true, and if not it has the value false. So, loop until hasMore is false and grab all the rows that get returned in all the calls.

That is exactly what we’ll do with PL/SQL, to let us use REST as a replacement for a database link. We will then have just a view that gets the user the needed data without them having to even know that we’re using REST.

We start with the infrastructure piece. That is the part that reads the REST view repeatedly until all rows have been read. We do this from the target schema (and database) rest_demo_caller.

create or replace package resttest as
  type payload_r is record (rsp clob);
  type payload_l is table of payload_r;

  function page_data return payload_l pipelined;
end resttest;
/

We need a type we can return (pipe) to a SQL. In this simple case we have only a CLOB in the record that then is set up as a type that is a table. The function has no parameters in this example and it returns data pipelined so it can be the source in a SQL.

The implementation is pretty straightforward also.

create or replace package body resttest as
  function page_data return payload_l pipelined is

    payload payload_r;
  
    payload_json json_object_t;

    next_url     varchar2(255) := 
       'https://.../ords/rest_demo_schema/vw_rest_svc/';
  
    more_data    varchar2(5) := 'true';
  begin
    while more_data = 'true' loop
      payload.rsp := apex_web_service.make_rest_request
                     (
                       p_url => next_url
                     , p_http_method => 'GET'
                     );
    
      pipe row(payload);
    
      payload_json := new json_object_t(payload.rsp);
      next_url := treat(payload_json.get_array('links')
                  .get(3) as json_object_t).get_string('href');
    
      more_data := payload_json.get_string('hasMore');
    end loop;
  end page_data;
end resttest;
/

Even if it is straightforward, it may not be obvious what happens and why, so let’s go through the different parts.

On line 4 the payload is declared as a record of the type we declared in the package spec. It will hold the json that is returned from the REST-call.

Line 6 declares payload_json which will be used to deal with the JSON response using JSON functions.

Line 8 declares next_url that holds the URL we’ll send in the REST call.

Line 11 declares more_data, which stores the value for hasMore from the last REST call.

In the begin-end (12-29) everything is a loop on more_data checking if there is yet more data to be fetched.

Line 14 makes the actual REST call, placing the returned JSON in payload.rsp.

Line 20 is where the record is piped out to the calling SQL.

Line 22 takes the JSON and puts it into a parsed JSON-object. It may look strange if you are new to object-notation syntax. But it creates a new object, nothing more.

Line 23 is a bit complex, but all it does is navigate the JSON document. It begins with grabbing the “links” array, takes the fourth element in it (which is an object) and returns the value for “href”. In plain English, it takes the value of the next-url and puts it into the next_url variable. Why does get(3) return the fourth element? PL/SQL collections normally start at 1, true, but this is JSON and JSON arrays are zero-indexed, so it stays true to JSON rather than PL/SQL.

Lastly on line 26 the attribute hasMore is pulled out to the variable more_data. This is so the loop ends when there is no more data to read from the rest-service.

Now we have a function that will return one JSON-document after another from the REST-service as long as we keep fetching them and there is more data to be had. To get this data converted to rows (each JSON-doc has 25 rows) we can use a SQL that looks like the one we used in the last post.

select object_id
     , owner
     , object_name
  from table(resttest.page_data) t
        cross join json_table (t.rsp, '$.items[*]'
                      columns ( object_id   number   path '$.object_id'
                              , owner       varchar2 path '$.owner'
                              , object_name varchar2 path '$.object_name')) j

This is more or less identical to the SQL in the last post, with a couple of changes. The select in the beginning of the FROM-clause has been replaced with a call to the function we just defined. This is to keep getting more and more data. The other change is that the first parameter in json_table is now t.rsp. “t” is the same as before, as it is the alias for the data from the REST call. But rsp used to be response; it changed because of how the column in the record was named.

With this in place we can now create a view based on this sql to allow us to select the data we want using a plain SQL.

create view the_view as
select object_id
     , owner
     , object_name
  from table(resttest.page_data) t
        cross join json_table (t.rsp, '$.items[*]'
                      columns ( object_id   number   path '$.object_id'
                              , owner       varchar2 path '$.owner'
                              , object_name varchar2 path '$.object_name')) j
;

You can now give a user access to the view “the_view” and they can get the data using this simple SQL.

select * from the_view;
OBJECT_ID OWNER  OBJECT_NAME 
      134 SYS    ORA$BASE    
      143 SYS    DUAL        
      144 PUBLIC DUAL        
      441 PUBLIC MAP_OBJECT  

The user does not need to know that the data is from json/rest or anything. To them it is just data they can work with. Pretty neat.

In the next post we’ll take a look at query-parameters. It is a complex area and we will just scratch the surface. But we can use it for a lot of things even with our simple setup of a rest enabled view.

Let’s talk about ORDS IV

Fri, 2023-05-05 08:00

This post is part of a series that starts with this post.

Having seen REST-calls from the database to get the raw JSON-response it is time to look at how it can be transformed into separate columns.

This is the SQL we use to get data split up.

select object_id
     , owner
     , object_name
  from (select apex_web_service.make_rest_request
            (
              p_url => 'https://.../ords/rest_demo_schema/vw_rest_svc/'
            , p_http_method => 'GET'
            ) response
          from dual) t
        cross join json_table (t.response, '$.items[*]'
                      columns ( object_id   number   path '$.object_id'
                              , owner       varchar2 path '$.owner'
                              , object_name varchar2 path '$.object_name')) j
;
OBJECT_ID OWNER  OBJECT_NAME             
--------- ------ ----------------------- 
      134 SYS    ORA$BASE                
      143 SYS    DUAL                    
      144 PUBLIC DUAL                    
      441 PUBLIC MAP_OBJECT              
      543 SYS    SYSTEM_PRIVILEGE_MAP    
      544 SYS    I_SYSTEM_PRIVILEGE_MAP  
      545 PUBLIC SYSTEM_PRIVILEGE_MAP    
      546 SYS    TABLE_PRIVILEGE_MAP     
      547 SYS    I_TABLE_PRIVILEGE_MAP   
      548 PUBLIC TABLE_PRIVILEGE_MAP     
      549 SYS    USER_PRIVILEGE_MAP      
      550 SYS    I_USER_PRIVILEGE_MAP    
      551 PUBLIC USER_PRIVILEGE_MAP      
      552 SYS    STMT_AUDIT_OPTION_MAP   
      553 SYS    I_STMT_AUDIT_OPTION_MAP 
      554 PUBLIC STMT_AUDIT_OPTION_MAP   
      705 SYS    FINALHIST$              
     1401 SYS    DM$EXPIMP_ID_SEQ        
     1417 SYS    MODELGTTRAW$            
     1716 SYS    AV_DUAL                 
     1790 SYS    STANDARD                
     1792 SYS    DBMS_STANDARD           
     1793 PUBLIC DBMS_STANDARD           
     1794 SYS    PLITBLM                 
     1795 PUBLIC PLITBLM                 

That looks just like the output from an ordinary table. So we have wrapped it up on one side into a REST formatted response and on the receiving end we unpack it into columns again. Pretty neat.

But we only got 25 rows. We’ll get back to why that is. Let us address the different parts in the SQL to understand what goes on here.

Lines 4-9 are similar to what we have seen before. It’s an inline view selecting the data from a REST endpoint. It gives the JSON that is returned the alias "response" and the inline view an alias of "t" so it can be referenced later.

Lines 10-13 are where the magic happens. The cross join is the effect of a join with no where clause. Each row from the inline view (there will be just one) is joined with the json_table function. In it we first define what JSON we want to use as input; t.response is what the inline view returned. $.items[*] references all rows in the items array: $ means the root of the JSON, items is the name of the array, brackets are how individual rows in an array are addressed, and * says that we want all the rows. The columns section picks out columns from the JSON in each entry in the array. The structure is: the column name we want, the datatype, and the data element in the JSON element (each row in the JSON array).

Lines 1-3 are nothing we have not seen before. You can have * there, but then you get more than you want. The inline view returns a “column” t.response that has the raw JSON. When you have converted that into separate columns you will usually not want it returned anymore, so you need to name the columns you do want.

That completes this post where we looked at how to go from returned JSON to get the data as the columns they were in the source. In the next post we’ll take a look at paging and how to get all rows returned to a SQL or via a view.

Let’s talk about ORDS III

Thu, 2023-05-04 08:00

This post is part of a series that starts with this post.

Now that we have seen various ways to get data and validate that the REST service is working, we can move on to implement access from within an Oracle database. I find validating outside the database as the first step to be very valuable. If you set up, in one go, one database that has the REST-enabled view and the REST service, plus a view in another database reading from that service, and then it does not work right – there are so many moving parts and so much abstraction that it makes troubleshooting very hard.

Once you can get it to work in cURL, it is usually straightforward to access the same endpoint from within a database.

First we need a user that can make the call to the rest service.

create user rest_demo_caller identified by REST_1234_caller
  default tablespace data
  temporary tablespace temp;

grant create session   to rest_demo_caller;
grant create view      to rest_demo_caller;
grant create procedure to rest_demo_caller;

Now we can log in as rest_demo_caller and test the REST service. Note that the user having the REST-enabled view has a separate name from this user that will invoke it. Typically you’d access a REST service in one database from another, but it is set up here to allow testing using just one database.

set serveroutput on
declare
  payload clob;
begin
    payload := apex_web_service.make_rest_request
        (
          p_url => 'https://.../ords/rest_demo_schema/vw_rest_svc/'
        , p_http_method => 'GET'
        );
    dbms_output.put_line(payload);
end;
/

The result will be identical to what we got from the cURL call, just returned in your SQL tool of choice. I’m using apex_web_service as it simplifies things; we’ll not build an APEX app here, we’re just using some of its neat infrastructure packages.

If you get an error referring to ACL or access control then you have not granted the database access to make a call to the ORDS server. This will not happen in an autonomous database, but if you roll your own it may happen. Then modify the following according to your environment and run it in the database you want to make the REST call from.

declare
  l_principal varchar2(20) := 'APEX_210200';
begin
  dbms_network_acl_admin.create_acl (
    acl          => 'ords_acl.xml', 
    description  => 'My ORDS-server',
    principal    => l_principal,
    is_grant     => true, 
    privilege    => 'connect',
    start_date   => systimestamp,
    end_date     => null);

  dbms_network_acl_admin.assign_acl (
    acl         => 'ords_acl.xml',
    host        => '<The name of your ORDS server goes here>', 
    lower_port  => 80,
    upper_port  => 80); 

  commit;
end;
/

Modify it so it references your APEX version. If you have not installed APEX you’ll have to look at alternative ways to make the REST call. The name of your ORDS server is what you enter in the host parameter. Just the host, no http prefix and no path added to the end. Only the name of the server.

Back to accessing the service from the database. So we have seen that we can get the JSON-response with PL/SQL. Let’s now do the same with SQL.

select apex_web_service.make_rest_request
            (
              p_url => 'https://.../ords/rest_demo_schema/vw_rest_svc/'
            , p_http_method => 'GET'
            )
  from dual;
/

Again we get the identical output only now returned as a query result.

We could of course wrap this into a view and have it returned. But it is still just raw JSON, and we would of course want to get the data split up into the rows it is fetching from the REST-enabled view. The next post will be spent looking at how to go from raw JSON to the data represented as separate columns.

Let’s talk about ORDS II

Wed, 2023-05-03 08:00

This post is part of a series that starts with this post.

Now that we have a REST-enabled view and we can get data from it with a basic cURL call, let’s look at more ways to make that call and view the response.

But before those calls, let’s look at what data we actually get from the REST call. If we format the output from the cURL call it looks something like this.

{"items":[......]
,"hasMore":true
,"limit":25
,"offset":0
,"count":25
,"links":
  [{"rel":"self".      ,"href":"https://.../ords/rest_demo_schema/vw_rest_svc/"}
  ,{"rel":"describedby","href":"https://.../ords/rest_demo_schema/metadata-catalog/vw_rest_svc/"}
  ,{"rel":"first".     ,"href":"https://.../ords/rest_demo_schema/vw_rest_svc/"}
  ,{"rel":"next".      ,"href":"https://.../ords/rest_demo_schema/vw_rest_svc/?offset=25"}
  ]
}

Items is where the array of rows returned is kept. I cut all of that out for brevity. hasMore shows if I can read more data from the service. Limit shows how much data was requested, offset how many rows were skipped over before reading, and count how many rows were actually returned. Offset may need a bit more explanation. If you make a request and then ask for the next set of 25, offset will be 25 to show that the requested data starts at row 26. We’ll see that in action later.

Links are miscellaneous links related to the data. Without hyperlinks in a response, the API is not REST. That is not me saying that, it is how REST is defined.

if the engine of application state (and hence the API) is not being driven by hypertext, then it cannot be RESTful and cannot be a REST API. Period.

Roy T. Fielding

Who is he and why does he have a say in what REST is, you may ask. He is the primary architect of the Hypertext Transfer Protocol (HTTP/1.1), coauthor of the Internet standards for HTTP and Uniform Resource Identifiers (URI), and a founder of several open source software projects (including the Apache HTTP Server Project that produces the software running most web servers). On top of that he wrote the dissertation that established REST. This part is often referred to as HATEOAS. Further reading about that is just a Google search away.

Back to this from that academic tangent. The links give you a link to the data returned here (self), the metadata about the service, the next set of rows, and the first set of rows. If you follow the next link, the result will also have a link for the previous set of rows.

With the look at what the response contains out of the way, let’s look at a few more ways to access it.

Let’s begin with letting python format the output for us.

curl https://.../ords/rest_demo_schema/vw_rest_svc/| python3 -m json.tool

That makes the response much more readable, but of course it takes up much more space height-wise. Still, to read and understand it you need something other than the raw response.

Now let’s take the same URL and put it into a web-browser.

If your web browser is set up to format JSON you will get a response similar to the Python one. The links are even active links. Click on next and you get to see the next set of 25 rows. For this to work you may need an extension in your web browser. I have JSON Formatter for Chrome. Search for one for your web browser and you will get several to choose from.

In the next post we will take what we did in this and set up in the database to achieve the same with SQL and PL/SQL.

Let’s talk about ORDS I

Tue, 2023-05-02 08:00

I recently started looking into a problem at work with getting security added to our setup, where views making REST calls work as if they were served over a database link. In that work I started from scratch, adding one piece at a time to make sure I could troubleshoot it. In doing so I realised that we often talk about the amazing magic of ORDS, but how does one break it down to understand and troubleshoot the smoke and mirrors? What I needed to break down the security setup ended up being what I needed to show my colleagues about what makes ORDS work. It also lends itself to a step-by-step series in blog form.

For the blog I’ll convert the on-prem demo to use OCI as everyone can have a free database and ORDS setup there with just a couple of clicks. Installing and getting ORDS working is easy, but not having to do it is easier.

I’m using my free Autonomous Database for the examples. You can do the same with other OCI DB services or on-prem / VM / Docker, but you may need to make some adjustments here and there.

You will need to get your base endpoint for the tests.

  • On cloud.oracle.com, navigate to your Autonomous database.
  • Open Database Actions.
  • Click on the hamburger menu and click on RESTful services and SODA under related services.
  • Copy and save somewhere the URL that pops up. The one I get looks like “https://ioxxxxxxxxxxxss-evilatp.adb.eu-frankfurt-1.oraclecloudapps.com/ords/”

No, that is not really the name of my tenancy, but close enough. You can verify that it works by issuing this cURL command.

curl -v https://ioxxxxxxxxxxxss-evilatp.adb.eu-frankfurt-1.oraclecloudapps.com/ords/

You’d of course paste the URL you got there instead of my doctored one. It should return quite a bit of text and among other things some lines looking like these towards the end.

< Set-Cookie: ORA_WWV_USER_9274378038389796=ORA_WWV-YbsoJGtYZdGRRlWv-eFSHkQB; path=/ords/; samesite=none; secure; HttpOnly
< Set-Cookie: ORA_WWV_RAC_INSTANCE=2; path=/ords/; samesite=none; secure; HttpOnly
< Location: https://ioxxxxxxxxxxxss-evilatp.adb.eu-frankfurt-1.oraclecloudapps.com/ords/f?p=4550:1:102000128053637:::::

If you get those cookies and a location looking much like APEX it worked and we can move on with ORDS and REST.

Now let’s set up the base part of getting data from a view in the database via REST. Begin with setting up the user, logged in as a privileged user.

create user rest_demo identified by Demo_Rest_1234
  default tablespace data
  temporary tablespace temp
  quota unlimited on data;

grant create session   to rest_demo;
grant create procedure to rest_demo;
grant create view      to rest_demo;

begin
    ords.enable_schema( p_enabled => true
                      , p_schema => 'REST_DEMO'
                      , p_url_mapping_type => 'BASE_PATH'
                      , p_url_mapping_pattern => 'rest_demo_schema'
                      , p_auto_rest_auth => false);
    commit;
end;
/

With that we have a user rest_demo that has access to create procedures and views and the schema is enabled for REST-access.

Now, log in with the user REST_DEMO and create a rest-enabled view.

create view vw_rest as
  select object_id
       , owner
       , object_name
    from all_objects;

begin
    ords.enable_object(p_enabled => true,
                       p_schema => 'REST_DEMO',
                       p_object => 'VW_REST',
                       p_object_type => 'VIEW',
                       p_object_alias => 'vw_rest_svc',
                       p_auto_rest_auth => false);
    commit;
end;
/

We have now set up a view that we can access with rest. To wrap up this blog post, let’s just try it by adding to the cURL-command we used before.

curl https://.../ords/rest_demo_schema/vw_rest_svc/
{"items":[{"object_id":134,"owner":"SYS","object_name":"ORA$BASE"},
<... snip ...>
{"object_id":1795,"owner":"PUBLIC","object_name":"PLITBLM"}]
,"hasMore":true,"limit":25,"offset":0,"count":25
,"links":[< ... snip ... >]} 

That is it, we have a REST service we can call from any environment that can issue a REST call to get to our data. Pretty impressive to get REST with just a few lines of DDL. The cURL call has rest_demo_schema and vw_rest_svc to identify the schema and the view. Those are not the actual names of either, but rather aliases, so the real names in the database are not exposed. You can see in the commands above where we gave them these alternative names.

That is it for this post. In the next post we’ll look at more ways to make the same rest call and to see the response in a more readable format.

Using package variables in SQL

Thu, 2023-04-06 08:00

I often want to use the constants defined in packages also in my SQL. It has in the past never worked, and for the most part it still does not.

If I have a package like this to simplify my code for what department it is I’m looking at.

create package dept_info as
  c_accounting constant scott.dept.deptno%type := 10;
  c_research   constant scott.dept.deptno%type := 20;
  c_sales      constant scott.dept.deptno%type := 30;
  c_operations constant scott.dept.deptno%type := 40;
end dept_info;
/

This now allows using dept_info.c_accounting in PL/SQL instead of just hardcoding 10 and having to remember what department 10 is. This is how we usually use PL/SQL, and especially Oracle’s packages, where they often provide constants for ease of use.
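As a minimal sketch of that, assuming the classic SCOTT schema used in the examples below, a PL/SQL block can reference the constant directly in its static SQL.

set serveroutput on
declare
  v_count pls_integer;
begin
  -- inside PL/SQL the package constant can be referenced directly in the SQL
  select count(*)
    into v_count
    from scott.emp
   where deptno = dept_info.c_accounting;

  dbms_output.put_line('Employees in accounting: ' || v_count);
end;
/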

However if I now try to use it in a SQL, it will not work.

select * 
  from scott.emp
 where deptno = dept_info.c_accounting;
ORA-06553: PLS-221: 'C_ACCOUNTING' is not a procedure or is undefined

Instead of that descriptive reference to the accounting department I have to resort to this.

select * 
  from scott.emp
 where deptno = 10;

Then the other day I realised that a feature that is way underused, and typically used for declaring programmatic logic in SQL, can be used to get access to the constant in the SQL statement. The with construct can nowadays define a function. That function can return a constant from a package.

with
  function c_accounting return number is
    begin
      return dept_info.c_accounting;
    end;
select * 
  from scott.emp
 where deptno = c_accounting
/

Pretty nifty, eh? Notice that such SQL cannot be terminated with a semicolon, as semicolon is used to delimit statements in the PL/SQL, so you have to terminate the SQL with a slash in SQL*Plus or SQLcl. Live SQL on the other hand parses it just fine if it is terminated with a semicolon.

This is a feature I’m sure will soon be in many SQLs I write.

Change the compatibility mode in APEX

Thu, 2023-03-23 08:00

I ran across a funny thing today in an application I built 10 years ago but that I have not been involved in maintaining since then. The application is getting some much needed UI upgrades, like getting out of the dark ages and using Universal Theme.

During that work there were a set of things that didn’t work: components could not be found and some unexpected things happened. Unexpected compared to the behavior that had been experienced while doing a mock app in the private environment in the VM.

It turns out the application still has the compatibility mode set to 4.1. Yes, no wonder it was behaving differently than when one has 22.x.

This is set during upgrade when upgrading from one version to another where the new one has a change in behavior. You’ll want to look into what the changed behavior is and make sure your application works well with it, so that it can be set to the latest version.

You find this setting in:

Application Properties -> Definition -> Properties -> Compatibility Mode

Oracle has a running list of the changed behavior, version by version, in the documentation. That is great, as one does not have to find the docs for each version that changed behavior. It is all in one convenient place.

https://docs.oracle.com/en/database/oracle/apex/22.2/htmrn/index.html#HTMRN-GUID-712BE54F-08CD-43A3-A645-87B9360ED516

It goes through the changes for each version where the behavior of the APEX engine changed, starting with 4.1 all the way up to the current version. Very convenient to be able to read up on them and see what one should expect and what to check after changing this setting.

As evidenced by this setting having been forgotten in this application, it is worth validating that applications you have upgraded have this set to the value you expect. For me I’d expect the latest, unless the work to verify that the application works as intended has not been completed.

You ought to have oracledb in your tool chest

Tue, 2023-03-21 08:00

No I’m not talking about the Oracle Database. It is extremely unlikely that you found your way here if it isn’t already to some extent your specialisation. If you made it here and have no idea what this is about, drop a comment and let me know your path here if that were to be the case.

No, this is about Python. Even more, it is about the OracleDB module for Python. That capitalisation is mine; the official way to write it is python-oracledb. Either way, it is Oracle’s implementation of the Python Database API 2.0 specification.

I know what some of you may be thinking. Why would I need that when I’ve got my old trusted cx_Oracle driver? Yes you do, and you have nothing to fear: python-oracledb is the new name for it. You’ll feel right at home.
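If I remember the migration notes correctly, existing cx_Oracle code can often keep running with nothing more than an import alias; a hedged sketch, with placeholder connection details:

# keep old cx_Oracle-style code running on the new driver
# (works for the parts of the API that are unchanged)
import oracledb as cx_Oracle

connection = cx_Oracle.connect(user='<USER>', password='<PASSWORD>', dsn='<CONN STRING>')
print(connection.version)
connection.close()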

Let’s start with recognising what a beautiful API it is. To me this is an API that is equally natural to use from the Python and database perspectives. I think that is not just due to Oracle’s efforts but also the general API specification that it follows. I often find the interface between language and SQL more natural in Python than even in PL/SQL. Yes, that is probably close to an offensive statement for some. But try it before you diss it.

Speaking of Python, if you too work with developers that mainly write code in Java, you will want to be able to write simple stubs of code to access the database and achieve the things you want them to implement to help you do your job. Most of the time it is enough to take some cookie cutter code and just add what you need.

I use Python the same way but for slightly different reasons. I use it to show myself a concept of the work one has to do when accessing the database from outside the database. It also makes many things easy to test out that for me are beyond cumbersome to set up in Java. In addition, Python is a language you want to have in your tool chest anyway as it is the preferred language for data these days.

Let’s rig a small test. Begin with installing the module.

python -m pip install oracledb --upgrade --user

Test getting a connection with some boilerplate code.

import oracledb

with oracledb.connect(user='<USER>', password='<PASSWORD>', dsn='<CONN STRING>') as connection:
    with connection.cursor() as cursor:
        sql = """select sysdate from dual"""
        for r in cursor.execute(sql):
            print(r)

As one would expect it just prints the sysdate. However, this blog is not about examples for coding with oracledb; rather I think it is worthwhile to finish up with highlighting some of the features it has, to show the kind of things you can try out with just a few more lines of code.

The plain way of using it uses the thin driver, which mimics how most application servers connect. If you want to, you can set up to use thick mode also. It is required for some advanced usage scenarios.
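A minimal sketch of switching to thick mode, assuming the Oracle Client libraries are installed (pass lib_dir to init_oracle_client if they are not on the library path); connection details are placeholders:

import oracledb

# load the Oracle Client libraries to enable thick mode; must be done before connecting
oracledb.init_oracle_client()

with oracledb.connect(user='<USER>', password='<PASSWORD>', dsn='<CONN STRING>') as connection:
    # the thin attribute reports False once thick mode is active
    print(connection.thin)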

Some of the features I think are of particular interest:

  • Advanced Queueing including receiving database notifications.
  • Support for EBR
  • Supporting setting context when creating a connection to allow for better instrumentation in the database.
  • Support for E2E monitoring and tracing
  • Database Resident Connection Pooling
  • ref cursors
  • Array-operations
  • JSON

And the list goes on.

One final thing I find very interesting. It has an API to set up a connection pool, so you can use it to model use cases for how a connection pool would behave. It also allows you to do things like build a REST server with its own connection pool to service parallel requests to your REST server.
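A minimal sketch of that pool API, with purely illustrative pool sizing and the same placeholder connection details as before:

import oracledb

# create a small pool and borrow connections from it
pool = oracledb.create_pool(user='<USER>', password='<PASSWORD>', dsn='<CONN STRING>',
                            min=1, max=4, increment=1)

with pool.acquire() as connection:
    with connection.cursor() as cursor:
        cursor.execute("select user from dual")
        print(cursor.fetchone())

pool.close()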

Take it out for a spin, I’m sure you’ll enjoy it.
