I'm working on designing an OIDC Relying Party (SP) that should work with most of the popular OIDC Providers (IdPs). I've been asked to support authentication and authorization for clients that are not web applications as well. Is OIDC recommended when there is no browser on the client? Which flow is most recommended for this case? Do most IdPs support such a flow (with no browser)?
Many OpenID Connect providers support some form of "Device Flow" (standardized as the OAuth 2.0 Device Authorization Grant).
Here is one example: https://auth0.com/blog/oauth-device-flow-no-hassle-authentication-as-seen-on-tv/
Search for "Device Flow" and you will find many more.
There's a draft specification for the Client-Initiated Backchannel Authentication (CIBA) flow (https://openid.net/specs/openid-client-initiated-backchannel-authentication-core-1_0.html).
This would work from a device that doesn't have a browser. Essentially, the client makes an authentication request and the OpenID Provider (OP) authenticates the user via an Authentication Device (AD), usually a smartphone.
Once the user is authenticated, the client receives the tokens via poll, ping, or push mode.
From the docs the flows look like this...
CIBA Poll Mode is illustrated in the following diagram:
+--------+ +--------+
| | | |
| |<---(1) CIBA Request-------------------------->| |
| | | |
| | +--------+ | |
| | | | | |
| Client | | AD |<--(2) User interactions---------->| OP |
| | | | | |
| | +--------+ | |
| | | |
| |----(3a) CIBA Polling Request----------------->| |
| |<---(3b) CIBA Polling Response-----------------| |
| | ... | |
| |----(3a) CIBA Polling Request----------------->| |
| |<---(3b) CIBA Polling Response-----------------| |
| | | |
+--------+ +--------+
CIBA Ping Mode is illustrated in the following diagram:
+--------+ +--------+
| | | |
| |<---(1) CIBA Request-------------------------->| |
| | | |
| | +--------+ | |
| | | | | |
| Client | | AD |<--(2) User interactions---------->| OP |
| | | | | |
| | +--------+ | |
| | | |
| |<---(3) CIBA Ping Callback---------------------| |
| | | |
| |----(4a) CIBA Token Request------------------->| |
| |<---(4b) CIBA Token Response-------------------| |
+--------+ +--------+
CIBA Push Mode is illustrated in the following diagram:
+--------+ +--------+
| | | |
| |<---(1) CIBA Request-------------------------->| |
| | | |
| | +--------+ | |
| | | | | |
| Client | | AD |<--(2) User interactions---------->| OP |
| | | | | |
| | +--------+ | |
| | | |
| |<---(3) CIBA Push Callback---------------------| |
| | | |
+--------+ +--------+
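To make the poll mode above concrete, here is a minimal Java sketch of the two HTTP calls the client makes. The endpoint URLs, client credentials, and login hint are placeholder assumptions; a real client must use the endpoints published in the OP's metadata and whatever client authentication method the OP requires.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CibaPollClient {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();

        // (1) CIBA Request: ask the OP to start backchannel authentication for a user.
        HttpRequest authRequest = HttpRequest.newBuilder()
                .uri(URI.create("https://op.example.com/bc-authorize"))   // placeholder endpoint
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "scope=openid&login_hint=user%40example.com"
                        + "&client_id=my-client&client_secret=my-secret"))
                .build();
        HttpResponse<String> authResponse =
                http.send(authRequest, HttpResponse.BodyHandlers.ofString());
        // The JSON body contains auth_req_id, expires_in and interval.
        String authReqId = extractString(authResponse.body(), "auth_req_id");

        // (3a/3b) CIBA Polling Request/Response: poll the token endpoint until the
        // user approves the request on the Authentication Device (AD).
        while (true) {
            HttpRequest tokenRequest = HttpRequest.newBuilder()
                    .uri(URI.create("https://op.example.com/token"))      // placeholder endpoint
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString(
                            "grant_type=urn%3Aopenid%3Aparams%3Agrant-type%3Aciba"
                            + "&auth_req_id=" + authReqId
                            + "&client_id=my-client&client_secret=my-secret"))
                    .build();
            HttpResponse<String> tokenResponse =
                    http.send(tokenRequest, HttpResponse.BodyHandlers.ofString());
            if (tokenResponse.statusCode() == 200) {
                // Success: the body holds access_token, id_token, etc.
                System.out.println(tokenResponse.body());
                break;
            }
            // An "authorization_pending" (or "slow_down") error means: wait and retry.
            Thread.sleep(5_000);
        }
    }

    // Naive string extraction, just to keep the sketch dependency-free;
    // use a real JSON parser in production code.
    private static String extractString(String json, String key) {
        int keyIndex = json.indexOf("\"" + key + "\"");
        int colon = json.indexOf(':', keyIndex);
        int start = json.indexOf('"', colon + 1) + 1;
        return json.substring(start, json.indexOf('"', start));
    }
}
```

In ping mode the polling loop is replaced by a single token request made after the OP's ping callback arrives, and in push mode the OP delivers the tokens directly to the client's callback endpoint.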
I have a table with the following columns (the email addresses are comma-separated):
+---------+----------+------------+---------------------------------------------+---------+
| Sr. No. | Domain | Store Name | Email | Country |
+---------+----------+------------+---------------------------------------------+---------+
| 1. | kkp.com | KKP | den#kkp.com, info#kkp.com, reno#kkp.com | US |
| 2. | lln.com | LLN | silo#lln.com | UK |
| 3. | ddr.com | DDR | info#ddr.com, dave#ddr.com | US |
| 4. | cpp.com | CPP | hello#ccp.com, info#ccp.com, stelo#ccp.com | CN |
+---------+----------+------------+---------------------------------------------+---------+
I want the output with the emails in separate columns:
+---------+----------+------------+---------------+---------------+---------------+---------+---------+
| Sr. No. | Domain | Store Name | Email 1 | Email 2 | Email 3 | Email N | Country |
+---------+----------+------------+---------------+---------------+---------------+---------+---------+
| 1. | kkp.com | KKP | den#kkp.com | info#kkp.com | reno#kkp.com | ....... | US |
| 2. | lln.com | LLN | silo#lln.com | | | ....... | UK |
| 3. | ddr.com | DDR | info#ddr.com | dave#ddr.com | | ....... | US |
| 4. | cpp.com | CPP | hello#ccp.com | info#ccp.com | stelo#ccp.com | ....... | CN |
+---------+----------+------------+---------------+---------------+---------------+---------+---------+
Can someone please help a beginner with SQL and BigQuery?
I'm trying to run a query in BigQuery and store the results in Cloud Storage. This is rather straightforward to do using BigQuery's API.
An issue comes up when I try to do this with multiple queries concurrently. "Extracting" the result table to Cloud Storage slows down significantly the more tables I try to extract. Here's a summary of an experiment I did with 20 concurrent jobs; results are in seconds.
job 013 done. Query: 012.0930221081. Extract: 009.8582818508. Signed URL: 000.3398022652
job 000 done. Query: 012.1677722931. Extract: 010.7060177326. Signed URL: 000.3358650208
job 002 done. Query: 009.5634860992. Extract: 014.2841088772. Signed URL: 000.3027939796
job 004 done. Query: 011.7068181038. Extract: 012.5938670635. Signed URL: 000.2734949589
job 020 done. Query: 009.8888399601. Extract: 015.4054799080. Signed URL: 000.3903510571
job 022 done. Query: 012.9012901783. Extract: 013.9143507481. Signed URL: 000.3490731716
job 014 done. Query: 012.8500978947. Extract: 015.0055649281. Signed URL: 000.2981300354
job 006 done. Query: 011.6835210323. Extract: 016.2601530552. Signed URL: 000.2789318562
job 001 done. Query: 013.4435272217. Extract: 015.2819819450. Signed URL: 000.2984759808
job 005 done. Query: 012.0956349373. Extract: 018.9619371891. Signed URL: 000.3134548664
job 018 done. Query: 013.6754779816. Extract: 020.0537509918. Signed URL: 000.3496448994
job 011 done. Query: 011.9627509117. Extract: 025.1803772449. Signed URL: 000.3009829521
job 008 done. Query: 015.7373569012. Extract: 136.8249070644. Signed URL: 000.3158171177
job 023 done. Query: 013.7817242146. Extract: 148.2014479637. Signed URL: 000.4145238400
job 012 done. Query: 014.5390141010. Extract: 151.3171939850. Signed URL: 000.3226230145
job 007 done. Query: 014.1386809349. Extract: 160.1254091263. Signed URL: 000.2966897488
job 021 done. Query: 013.6751790047. Extract: 162.8383400440. Signed URL: 000.3162341118
job 019 done. Query: 013.5642910004. Extract: 163.2161693573. Signed URL: 000.2765989304
job 003 done. Query: 013.8807480335. Extract: 165.1014308929. Signed URL: 000.3309218884
job 024 done. Query: 013.5861997604. Extract: 182.0707099438. Signed URL: 000.3331830502
job 009 done. Query: 013.5025639534. Extract: 199.4397711754. Signed URL: 000.4156360626
job 015 done. Query: 013.7611100674. Extract: 230.2218120098. Signed URL: 000.2913899422
job 016 done. Query: 013.4659759998. Extract: 285.7284781933. Signed URL: 000.3109869957
job 017 done. Query: 019.2001299858. Extract: 322.5298812389. Signed URL: 000.2890429497
job 010 done. Query: 014.7132742405. Extract: 363.8596160412. Signed URL: 000.6748869419
A job does three things (see the sketch after this list):
1. Submits a query to BigQuery
2. Extracts the result table to Cloud Storage
3. Generates a signed URL for the blob in Cloud Storage
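For context, here is a rough sketch of what each job does, written with the Java client libraries (google-cloud-bigquery and google-cloud-storage). The query, bucket, and object names are placeholders, and this is an assumption about the setup rather than the actual code used.

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.ExtractJobConfiguration;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.TableId;
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

import java.net.URL;
import java.util.concurrent.TimeUnit;

public class QueryExtractSign {
    public static void main(String[] args) throws Exception {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
        Storage storage = StorageOptions.getDefaultInstance().getService();

        // 1. Submit a query to BigQuery and wait for it to finish.
        QueryJobConfiguration queryConfig = QueryJobConfiguration
                .newBuilder("SELECT name FROM `my-project.my_dataset.my_table`")  // placeholder query
                .build();
        Job queryJob = bigquery.create(JobInfo.of(queryConfig)).waitFor();

        // The query results land in a (temporary) destination table.
        TableId resultTable =
                ((QueryJobConfiguration) queryJob.getConfiguration()).getDestinationTable();

        // 2. Extract the result table to Cloud Storage (the step that slows down under concurrency).
        String gcsUri = "gs://my-bucket/results/job-001.csv";                      // placeholder object
        Job extractJob = bigquery.create(
                JobInfo.of(ExtractJobConfiguration.of(resultTable, gcsUri))).waitFor();
        if (extractJob.getStatus().getError() != null) {
            throw new RuntimeException(extractJob.getStatus().getError().toString());
        }

        // 3. Generate a signed URL for the extracted blob.
        BlobInfo blob = BlobInfo.newBuilder(BlobId.of("my-bucket", "results/job-001.csv")).build();
        URL signedUrl = storage.signUrl(blob, 1, TimeUnit.HOURS, Storage.SignUrlOption.withV4Signature());
        System.out.println(signedUrl);
    }
}
```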
As the results show, the first group of extracts takes 9-25 seconds; after that, the extracts start taking much longer.
Any ideas why this is happening? Could this be the reason: https://cloud.google.com/storage/docs/request-rate ?
Is there any way of fixing this?
EDIT: Here's some additional information I discovered.
| job | Local extract time (s) | Google extract time (s) | Google's extract started | Google's extract ended | Local extract start | Local extract end |
| --- | ------------------- | -------------------- | ------------------------ | ---------------------- | ------------------- | ------------------- |
| 026 | 009.26328 | 008.84300 | 13:39:00.441000 | 13:39:09.284000 | 07:39:00.235970 | 07:39:09.498784 |
| 009 | 011.52299 | 008.04000 | 13:39:00.441000 | 13:39:08.481000 | 07:39:00.234297 | 07:39:11.756788 |
| 004 | 010.35730 | 008.66700 | 13:39:03.436000 | 13:39:12.103000 | 07:39:03.240466 | 07:39:13.597328 |
| 011 | 011.86404 | 009.29900 | 13:39:03.055000 | 13:39:12.354000 | 07:39:02.893600 | 07:39:14.756887 |
| 006 | 012.50416 | 011.75400 | 13:39:02.854000 | 13:39:14.608000 | 07:39:02.623032 | 07:39:15.126790 |
| 000 | 013.30535 | 008.77000 | 13:39:02.056000 | 13:39:10.826000 | 07:39:01.863548 | 07:39:15.168434 |
| 002 | 011.47199 | 008.53700 | 13:39:04.443000 | 13:39:12.980000 | 07:39:04.236455 | 07:39:15.708005 |
| 032 | 015.68229 | 009.69200 | 13:39:02.915000 | 13:39:12.607000 | 07:39:02.768185 | 07:39:18.450160 |
| 001 | 017.46480 | 009.35800 | 13:39:01.313000 | 13:39:10.671000 | 07:39:01.071540 | 07:39:18.535896 |
| 012 | 019.02242 | 008.65700 | 13:39:00.903000 | 13:39:09.560000 | 07:39:00.727101 | 07:39:19.749070 |
| 018 | 016.95632 | 009.75800 | 13:39:03.259000 | 13:39:13.017000 | 07:39:03.080580 | 07:39:20.036199 |
| 019 | 017.24428 | 008.51100 | 13:39:03.773000 | 13:39:12.284000 | 07:39:03.575118 | 07:39:20.819042 |
| 008 | 019.55018 | 009.83600 | 13:39:02.110000 | 13:39:11.946000 | 07:39:01.905548 | 07:39:21.455273 |
| 023 | 016.64131 | 008.94500 | 13:39:05.282000 | 13:39:14.227000 | 07:39:05.041235 | 07:39:21.682086 |
| 017 | 019.39104 | 007.12700 | 13:39:03.118000 | 13:39:10.245000 | 07:39:02.896256 | 07:39:22.286485 |
| 020 | 019.96283 | 010.05000 | 13:39:03.115000 | 13:39:13.165000 | 07:39:02.942562 | 07:39:22.904864 |
| 036 | 022.05831 | 010.51200 | 13:39:02.626000 | 13:39:13.138000 | 07:39:02.461061 | 07:39:24.518903 |
| 024 | 028.39538 | 008.79600 | 13:39:05.151000 | 13:39:13.947000 | 07:39:04.916194 | 07:39:33.311248 |
| 007 | 107.36010 | 010.68900 | 13:40:31.555000 | 13:40:42.244000 | 07:39:03.050049 | 07:40:50.409359 |
| 028 | 120.63134 | 009.52400 | 13:40:49.915000 | 13:40:59.439000 | 07:39:02.941202 | 07:41:03.572094 |
| 033 | 120.78268 | 009.54200 | 13:40:27.147000 | 13:40:36.689000 | 07:39:04.152378 | 07:41:04.934602 |
| 037 | 122.64949 | 008.80400 | 13:40:33.298000 | 13:40:42.102000 | 07:39:06.500587 | 07:41:09.149629 |
| 035 | 125.35254 | 009.13200 | 13:40:27.600000 | 13:40:36.732000 | 07:39:04.295941 | 07:41:09.647836 |
| 015 | 139.13287 | 011.17800 | 13:40:27.116000 | 13:40:38.294000 | 07:39:03.406321 | 07:41:22.538701 |
| 029 | 141.21037 | 008.23700 | 13:40:24.271000 | 13:40:32.508000 | 07:39:03.816588 | 07:41:25.026438 |
| 013 | 145.94239 | 009.19400 | 13:40:33.809000 | 13:40:43.003000 | 07:39:03.375451 | 07:41:29.317454 |
| 039 | 149.92807 | 009.72300 | 13:40:33.090000 | 13:40:42.813000 | 07:39:03.635156 | 07:41:33.562607 |
| 016 | 166.26505 | 010.12000 | 13:40:39.999000 | 13:40:50.119000 | 07:39:03.383215 | 07:41:49.647907 |
| 010 | 210.61908 | 011.37900 | 13:42:20.287000 | 13:42:31.666000 | 07:39:03.702486 | 07:42:34.321079 |
| 027 | 227.83011 | 010.00900 | 13:42:25.845000 | 13:42:35.854000 | 07:39:02.953435 | 07:42:50.783106 |
| 025 | 228.48326 | 009.71000 | 13:42:20.845000 | 13:42:30.555000 | 07:39:03.673122 | 07:42:52.155934 |
| 022 | 244.57685 | 010.06900 | 13:42:53.712000 | 13:43:03.781000 | 07:39:03.963936 | 07:43:08.540307 |
| 021 | 263.74717 | 009.81400 | 13:42:40.211000 | 13:42:50.025000 | 07:39:04.505016 | 07:43:28.251864 |
| 031 | 273.96990 | 008.55100 | 13:43:18.645000 | 13:43:27.196000 | 07:39:03.618419 | 07:43:37.587862 |
| 034 | 280.96174 | 010.53300 | 13:42:58.364000 | 13:43:08.897000 | 07:39:04.313498 | 07:43:45.274962 |
| 030 | 281.76029 | 008.27100 | 13:42:49.448000 | 13:42:57.719000 | 07:39:03.832644 | 07:43:45.592592 |
| 005 | 288.15577 | 009.85300 | 13:43:04.825000 | 13:43:14.678000 | 07:39:04.006553 | 07:43:52.161888 |
| 003 | 296.52279 | 009.65300 | 13:43:24.041000 | 13:43:33.694000 | 07:39:03.831264 | 07:44:00.353715 |
| 038 | 380.01783 | 008.45000 | 13:44:57.326000 | 13:45:05.776000 | 07:39:03.055733 | 07:45:23.073209 |
| 014 | 397.05841 | 008.99800 | 13:44:48.577000 | 13:44:57.575000 | 07:39:03.132323 | 07:45:40.190302 |
The table shows how long I have to wait locally for my jobs and how long Google actually takes to run them. Looking at the times, the extract itself doesn't take Google very long, but Google won't run all the jobs at the same time, which forces some extracts to wait a few minutes before starting.
You're correct, there is currently an internal limit on how fast export jobs are processed. This was originally put in place to protect the system from too many long and expensive exports running in parallel. However, as you noted, this limit doesn't help in your case, where you have many export jobs that each complete within a minute.
We have an open (internal) bug to address this and make the situation better for smaller exports like yours. In the meantime, if you think you're blocked by this, file a bug or let me know your project ID and we can help raise the limit for your project.
I have this current view (SQL Query already developed):
| Application Control ID | Application Name | System |
-----------------------------------------------------------
| A0037 | ABR_APP1 | ABR |
| A1047 | ABR_APP2 | ABR |
| A2051 | ABR_APP3 | ABR |
| A2053 | ABR_APP4 | ABR |
| A1909 | ABR_APP5 | ABR |
| A0032 | AIS_APP1 | AIS |
| A0029 | AIS_APP2 | AIS |
| A0030 | AIS_APP3 | AIS |
| A0039 | AOL_APP1 | AOL |
| A0038 | AOL_APP2 | AOL |
I need to change it to this:
| Application Control ID | Application Name | System |
------------------------------------------------------
| S0001 | [blank] | ABR |
| A0037 | ABR_APP1 | ABR |
| A1047 | ABR_APP2 | ABR |
| A2051 | ABR_APP3 | ABR |
| A2053 | ABR_APP4 | ABR |
| A1909 | ABR_APP5 | ABR |
| S0002 | [blank] | AIS |
| A0032 | AIS_APP1 | AIS |
| A0029 | AIS_APP2 | AIS |
| A0030 | AIS_APP3 | AIS |
| S0003 | [blank] | AOL |
| A0039 | AOL_APP1 | AOL |
| A0038 | AOL_APP2 | AOL |
The datamart tables in our knowledge management system are as follows:
- ATO_ATO_Application
- ATO_ATO_System
- ATO_APPLICATION_TO_SYSTEM
Question: Are those sub-headers (S0001..., S0002..., S0003...) easy to add to the view through SQL?
- Somehow I need to show this “sub-header” data from the ATO_ATO_System table and place it above the logical application chunk/set, as shown above.
Thanks in advance.
Albert
I'm making a UWP (Windows 10) app. I'd like to know whether it is possible to change the orientation of a SplitView. Typically, it's laid out like this:
______________________________________________
| | |
| | |
| | |
| | |
| | |
| Pane | Content |
| | |
| | |
| | |
| | |
| | |
----------------------------------------------
Is it possible to change the orientation to:
______________________________________________
| |
| |
| Pane |
| |
| |
| |
----------------------------------------------
| |
| |
| |
| |
| Content |
| |
| |
| |
----------------------------------------------
It is not supported by the platform (the SplitView.PanePlacement property can only be Left or Right).
You can likely achieve a somewhat similar effect by placing a command bar at the top of your application.
What are segments in Lucene?
What are the benefits of segments?
The Lucene index is split into smaller chunks called segments. Each segment is its own index. Lucene searches all of them in sequence.
A new segment is created when a new writer is opened and when a writer commits or is closed.
The advantage of this system is that you never have to modify the files of a segment once it is created. When you add new documents to your index, they are added to the next segment; previous segments are never modified.
Deleting a document is done by simply indicating in a file which document of a segment is deleted, but physically, the document always stays in the segment. Documents in Lucene aren't really updated. What happens is that the previous version of the document is marked as deleted in its original segment and the new version of the document is added to the current segment. This minimizes the chances of corrupting an index by constantly having to modify its content when there are changes. It also allows for easy backup and synchronization of the index across different machines.
However, at some point, Lucene may decide to merge some segments. This operation can also be triggered explicitly with an optimize (called forceMerge in newer Lucene versions).
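As a small illustration (using a recent Lucene version and placeholder field names): each commit of an IndexWriter flushes the buffered documents into a brand-new segment on disk and leaves earlier segments untouched, while a merge rewrites segments into fewer, larger ones.

```java
import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class SegmentDemo {
    public static void main(String[] args) throws Exception {
        Directory dir = FSDirectory.open(Paths.get("/tmp/demo-index"));     // placeholder path
        IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));

        Document doc = new Document();
        doc.add(new TextField("title", "first document", Field.Store.YES));
        writer.addDocument(doc);
        writer.commit();   // flushes buffered docs into a new segment (e.g. _0)

        Document doc2 = new Document();
        doc2.add(new TextField("title", "second document", Field.Store.YES));
        writer.addDocument(doc2);
        writer.commit();   // another segment; the earlier one is not modified

        // Deletes only mark documents as deleted; merging later rewrites
        // segments and physically drops the deleted documents.
        writer.forceMerge(1);
        writer.close();
    }
}
```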
A segment is very simply a section of the index. The idea is that you can add documents to the index that's currently being served by creating a new segment with only new documents in it. This way, you don't have to go to the expensive trouble of rebuilding your entire index frequently in order to add new documents to the index.
The segment benefits have already been answered by others. I will include an ASCII diagram of a Lucene index.
Lucene Segment
A Lucene segment is part of an index. Each segment is composed of several index files. If you look inside any of these files, you will see that it holds one or more Lucene documents.
+- Index 5 ------------------------------------------+
| |
| +- Segment _0 ---------------------------------+ |
| | | |
| | +- file 1 -------------------------------+ | |
| | | | | |
| | | +- L.Doc1-+ +- L.Doc2-+ +- L.Doc3-+ | | |
| | | | | | | | | | | |
| | | | field 1 | | field 1 | | field 1 | | | |
| | | | field 2 | | field 2 | | field 2 | | | |
| | | | field 3 | | field 3 | | field 3 | | | |
| | | | | | | | | | | |
| | | +---------+ +---------+ +---------+ | | |
| | | | | |
| | +----------------------------------------+ | |
| | | |
| | | |
| | +- file 2 -------------------------------+ | |
| | | | | |
| | | +- L.Doc4-+ +- L.Doc5-+ +- L.Doc6-+ | | |
| | | | | | | | | | | |
| | | | field 1 | | field 1 | | field 1 | | | |
| | | | field 2 | | field 2 | | field 2 | | | |
| | | | field 3 | | field 3 | | field 3 | | | |
| | | | | | | | | | | |
| | | +---------+ +---------+ +---------+ | | |
| | | | | |
| | +----------------------------------------+ | |
| | | |
| +----------------------------------------------+ |
| |
| +- Segment _1 (optional) ----------------------+ |
| | | |
| +----------------------------------------------+ |
+----------------------------------------------------+
Reference
Lucene in Action, Second Edition (Manning Publications, July 2010)