Google Cloud, BigQuery assets table, os_inventory.update_time - SQL

We have inventory set up for our GCP Compute Engine VMs and run export commands daily to create project-level asset tables (one per project) in BigQuery under the my-monitoring project.
We can then query the VMs, let's say for installed packages. Here is an example query to check whether the package "xxx" exists on the VMs deployed in test-project:
SELECT pack.value.installed_package.apt_package, os_inventory.update_time
FROM `my-monitoring.InventoryLogs.test-project_compute_googleapis_com_Instance`,
  UNNEST(os_inventory.items) AS pack
WHERE pack.value.installed_package.apt_package.package_name = 'xxx'
The problem is that it always reports that package xxx exists if it was ever installed. Even if I remove the package later on, this query still shows it as present. The output looks something like this:
As I understand it, the query is returning old records. I looked at some other VMs, and the value of os_inventory.update_time is very old. Does anyone know what this value is for and when it refreshes? I was expecting it to be updated every time we run the export assets command. Any ideas on how to query the inventory table for the latest package values?
Or even any other solution for querying whether a particular package exists on all VMs across all projects?
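One possible approach, as a sketch only: assuming each daily export appends a new snapshot row per instance, and assuming the table exposes the resource name column and that os_inventory.update_time orders the snapshots, you could keep only the most recent row per instance before checking for the package:

-- Sketch: keep only the latest snapshot per instance before checking the package.
-- Assumes daily exports append rows and os_inventory.update_time identifies the newest one.
SELECT inv.name, pack.value.installed_package.apt_package, inv.os_inventory.update_time
FROM (
  SELECT *,
         ROW_NUMBER() OVER (PARTITION BY name
                            ORDER BY os_inventory.update_time DESC) AS rn
  FROM `my-monitoring.InventoryLogs.test-project_compute_googleapis_com_Instance`
) AS inv,
UNNEST(inv.os_inventory.items) AS pack
WHERE inv.rn = 1
  AND pack.value.installed_package.apt_package.package_name = 'xxx'

For the cross-project case, a wildcard table over the per-project tables (for example `my-monitoring.InventoryLogs.*` filtered on _TABLE_SUFFIX) might let the same check run across all projects, depending on how the export names the tables.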


dbt : Database Error Insufficient Permission

In my current dbt project, I run everything in the same Google Cloud project (let's say project dataA). Since the number of datasets has grown a lot, I decided to split the setup into two projects: the current project for importing raw data and a new project (for example dataB) as the production environment where I store all the data marts.
I use a service account to manage reading and editing the data sources in both projects, and I am sure there are no issues with the rights. The profile settings are quite similar to my current settings, which work fine.
But I am experiencing Database Error issues from dbt saying that I have insufficient permissions.
Does anyone have an idea about the reason for the issue? And how to fix it?
Many thanks!
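One thing worth checking, as a sketch only: if the failure comes from dataset-level access in the new project, the service account may need explicit grants on the dataB datasets (plus a job-running role such as BigQuery Job User on whichever project executes the queries). The dataset and account names below are hypothetical:

-- Hypothetical grant on a data mart dataset in the new project (names are placeholders).
GRANT `roles/bigquery.dataEditor`
ON SCHEMA `dataB`.marts
TO "serviceAccount:dbt-runner@dataA.iam.gserviceaccount.com";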

U-SQL : How to merge two usql files with same import statement

I want to deploy multiple table creation scripts as one ADLA job to save on cost. I am using packages to get the set of defined partition keys for all tables. When I try to deploy the merged script, it complains that the import statement is declared multiple times and fails.
While I can still deploy the scripts one by one, I wanted to see if we can merge the scripts for faster deployment.
Thanks
Amit
I am not sure I completely get your scenario. If you want to deploy a single object by itself, then that file needs to include all of its dependencies (e.g., your package). If you want to deploy several objects, you should include the dependencies only once.
You probably should set up something that generates your script from the underlying "fragments". One fragment would be the reference to the package, and each of the other fragments would create one object. Your deployment system would then concatenate the files as needed.
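A minimal sketch of what such a concatenated script might look like, assuming a package that declares the partition key values; all object and variable names here are hypothetical:

// Reference the shared package exactly once, at the top of the merged script.
IMPORT PACKAGE MyDb.dbo.PartitionKeysPackage;

// Fragment 1: first table.
CREATE TABLE IF NOT EXISTS MyDb.dbo.TableA
(
    Id int,
    EventDate DateTime,
    INDEX idx_TableA CLUSTERED (Id ASC)
    PARTITIONED BY (EventDate)
    DISTRIBUTED BY HASH (Id)
);

// Fragment 2: second table, reusing the same import.
CREATE TABLE IF NOT EXISTS MyDb.dbo.TableB
(
    Id int,
    EventDate DateTime,
    INDEX idx_TableB CLUSTERED (Id ASC)
    PARTITIONED BY (EventDate)
    DISTRIBUTED BY HASH (Id)
);

// Assuming the package declares @defaultPartitionDate, every fragment can use it.
ALTER TABLE MyDb.dbo.TableA ADD IF NOT EXISTS PARTITION (@defaultPartitionDate);
ALTER TABLE MyDb.dbo.TableB ADD IF NOT EXISTS PARTITION (@defaultPartitionDate);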

Upgrade nopcommerce 2.8 to 3.10

Hello,
I am new to NopCommerce. I have changes in Nop.Core, Nop.Data, and Nop.Services. I have also changed some controllers, models, and views in Nop.Web.
If I wish to upgrade NopCommerce from version 2.8 to 3.10, which way is easiest and best?
1) I back up my files and get the update. Once the update is finished, may I replace only those parts which I have updated and which differ from the original code? May I add new methods which are in my backup files but not in the original code?
2) Or do I have to create a new plugin or take some other approach?
[For example: I have changed the product table and added new fields like size, age, and color.]
Please let me know your valuable feedback.
Thanks
There is no straight right or wrong answer. I am describing the approach I took, assuming you have code changes and database changes on top of base nop 2.80.
Ground Work
Write down a detailed list of your modifications (the additional functions you have added on top of 2.80).
Check whether any of your modifications are supported out of the box in 3.10.
My modification count was 250 (detailed enough to estimate each one).
Approach
Upgrade the 2.80 DB to the 3.10 schema.
Modify the 3.10 code to support the custom features you built on top of 2.80.
DB Upgrade
Find a good database diff tool, e.g. SQL Compare.
Restore your production (2.80) DB to your dev PC and install a clean nop 3.10 DB on your dev PC as well.
Compare both DBs table by table. Basically, you are going to upgrade the 2.80 DB to 3.10 by comparing against the 3.10 schema.
Alter/delete/add columns in 2.80 by comparing with 3.10.
Create the store information (Store table). This is a new feature in 3.10, and StoreID is needed for most other tables.
Update customer data to match the 3.10 schema.
Update product information. The ProductVariant table is now merged into the Product table, so the Product table needs to be updated.
Update order details. OrderVariant is now OrderItem, so move the data.
Move the other tables.
I created a single SQL script which:
Restores the production DB from a backup file.
Contains a script block for each table, which upgrades that table and populates its data.
This gives you the flexibility to run the script again and again if there is any error, which is also helpful while you are still writing it. A rough skeleton of such a script is sketched below.
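A minimal sketch of that kind of script skeleton, assuming SQL Server; the file path, database name, and columns are placeholders, not the actual nop schema:

-- Restore the production (2.80) backup first, so the whole script can be re-run from scratch.
USE master;
RESTORE DATABASE NopShop FROM DISK = N'D:\backups\nop_280.bak' WITH REPLACE;
GO

USE NopShop;
GO

-- One block per table: align the 2.80 schema with 3.10, then move/transform the data.
-- (Hypothetical example block; repeat the pattern for each table.)
ALTER TABLE dbo.Customer ADD SomeNew310Column INT NULL;
GO
UPDATE dbo.Customer SET SomeNew310Column = 0;
GO

-- e.g. fold ProductVariant data back into Product (simplified placeholder).
-- INSERT INTO dbo.Product (...) SELECT ... FROM dbo.ProductVariant;
GO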
In addition to this, if you are merging two or more stores into one:
Add all the store information in step 5.
From this point on, create a separate script for each store.
You need different sequence numbers for OrderId and CustomerId; they can't be the same across stores (see the reseed sketch below).
When you add the 2nd or later stores, check for existing customers before adding them.
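A sketch of how the ID ranges could be kept apart, assuming SQL Server identity columns; the table names and offset are illustrative only:

-- Before migrating the 2nd store, move the identity seed of the shared tables into a fresh
-- range so newly assigned IDs cannot collide with the 1st store's IDs
-- (assumes the 1st store uses fewer than 500000 IDs; adjust the offset to your data).
DBCC CHECKIDENT ('dbo.Customer', RESEED, 500000);
DBCC CHECKIDENT ('dbo.[Order]', RESEED, 500000);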
Check 01
Now take a fresh 3.10 code base and run it against your migrated DB. Everything should work well if you have done the migration properly.
Code Upgrade
There are significant changes to be made in the code, simply because there is no ProductVariant table anymore, so all the custom logic around it needs to be rewritten.
The main issue is invoicing. If you have more than one store, there is no per-store email setting, so you have to custom-modify that too.
A good approach would be:
Do all the customer-side eCommerce work first.
Then do the admin side.
If a feature spans both the customer and admin sides, do them together; for example, custom modifications to the order placing workflow.
There should not be big modifications needed for plugins.
Check 02
Run the migrated DB with the updated 3.10 code base. Everything should work.
On Big Day
Back up the production DB and the production code base.
Run the upgrade scripts and deploy the new code base.
There is no 3rd step, since you have done all the hard work before this.
And if you screw up, roll back.
Things to Note
I learned these by testing. Thank god I found them before the actual migration.
At the time we were migrating, there were no detailed instructions on how to set up a complete multi-store solution on the nopCommerce side. There is an instruction here on how to set up nopCommerce on a production server, but it does not cover all the aspects.
We were using a VPS server to host our platform. If you are using a VPS, be aware that SNI needs to be used if you set up multi-store properly. Only IIS 8 and above supports SNI, which means you need Windows Server 2012. See here and here for more on SNI.
We were using Plesk to manage the server, so we set up the master domain as primary and all the other stores as aliases. On the IIS side, RDP into the VPS and set up SSL for each domain using the SNI feature of IIS 8.
The downside of SNI is that it is not supported by all old browsers. See here.
Limitations
If you are using Plesk, then email won't work very well, since a mailbox is created only for the master domain and all the alias domains share the same email accounts. So you cannot send a reply from an alias email address. Unfortunately, this is outside the scope of nopCommerce development.
I haven't found a solution for this yet; I am still working on it.
I'd recommend doing the database upgrade incrementally. According to the upgrade guide, you must apply the upgrade scripts one at a time; just read through the guide and have at it.

Problems with BigQuery and Cloud SQL in same project

So, we have this one project which uses Cloud Storage and BigQuery as services. All has been well.
Then, I wanted to add Cloud SQL to this project to try it out. It asked for a unique Project ID so I gave it one. (The Project ID is different than the Project Number.)
Ever since then, I've been having a difficult time accessing my BigQuery tables. When I go to the BigQuery web interface, the URL contains the Project ID instead of the original Project Number. It shows the list of datasets, but now shows the Project Number before each dataset name and the datasets are greyed out and inaccessible. If I manually change the URL to contain the Project Number instead of the Project ID, it appears to work although it shows the list of datasets in the left nav twice, one set greyed out and inaccessible and the other set seemingly accessible.
At the same time, some Apps Script code that I've been successfully using to access BigQuery is now regularly failing with a generic "We're sorry, a server error occurred. Please wait a bit and try again." I'm not sure if this is related to the Project ID/Project Number confusion, or if it's just a red herring.
Since we actively use the Cloud Storage service of this project, I am trying to be cautious with further experimentation with this project. I'm not sure if I should delete the Cloud SQL service in this project to get it back to the way it was, or if this is a known issue with some back-end solution. Please advise.
After setting the project ID, there can be a delay before BigQuery picks up the change. It should happen within 15 minutes or so, but sometimes it takes longer.
If you send the project ID I can make sure it has been updated.

TFS2010 database size

We've been using TFS since around 2009 when we installed TFS2008. We upgraded to TFS2010 at some point and we've been using it for source control, work item management, builds etc.
Our TFSVersionControl.mdf file is 287,120,000 KB (273 GB). We ran some queries and found that our tbl_BuildInformationField table is massive. It has 1,358,430,452 rows, which take up 150,988,624 KB (143 GB). We have multiple active products across multiple active builds, with more than one solution per build, and the solutions aren't free of warning messages.
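For reference, a sketch of the kind of query that surfaces this, assuming direct read-only access to the collection database (standard SQL Server catalog views, nothing TFS-specific):

-- List the largest tables in the current database by allocated space.
SELECT t.name AS TableName,
       SUM(CASE WHEN i.index_id IN (0, 1) THEN p.rows ELSE 0 END) AS RowCounts,
       SUM(a.total_pages) * 8 AS TotalKB
FROM sys.tables t
JOIN sys.indexes i ON t.object_id = i.object_id
JOIN sys.partitions p ON i.object_id = p.object_id AND i.index_id = p.index_id
JOIN sys.allocation_units a ON p.partition_id = a.container_id
GROUP BY t.name
ORDER BY TotalKB DESC;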
My questions:
Is it possible to stop MSBuild from spamming the tbl_BuildInformationField table so much? I.e., only write errors and general build information and not all the warnings for every project?
Is there a way to purge or clean up old data from this table?
Is 273GB for 4 years of TFS use an average size?
Is 143GB for tbl_BuildInformationField a "normal" size?
The table holds the values and output of the build process. Take note that the build retention policy doesn't actually delete the build object; like everything else in TFS, the object is only marked deleted, and just the public visibility and drop location are cleared.
If you have retained the same build definitions for a very long time (when a build definition is deleted, the related objects get removed as well), I would suggest querying for build info, including deleted builds, using the TFS API; the same API will also allow you to remove them for good. Deleting the build definitions themselves probably will not work and will fail with a timeout error.
You can consult the following:
http://blogs.msdn.com/b/adamroot/archive/2009/06/12/working-with-deleted-build-data-in-team-foundation-server-2010-beta-1.aspx