Variable type for BIT - SQL

I am getting values from a database table and saving them in variables of the corresponding types.
My doubt is about BIT type data coming from the database (i.e. True or False):
what datatype should I use to store it?
E.g.:
public string Industry { get; set; }
public bool CO2e { get; set; }
public int ID { get; set; }
Here Industry and ID are string and int type respectively, but CO2e is the variable I am using for the BIT type data coming from the table. So would using bool for it be correct?

Yes, that is correct. See the documentation: the string values TRUE and FALSE can be converted to bit values: TRUE is converted to 1 and FALSE is converted to 0.
Note that a bit may only hold 1 or 0, which is all you need to represent a boolean in a persistent way. Note that for SQL Server in particular, the database will return "true" and "false", literally.

A bit has just two values, 0 and 1, so a bool is the perfect match for such a value.

Related

Golang - GORM how to create a model for an existing table?

According to the documentation at https://gorm.io/docs/index.html, to declare a model we do this:
type User struct {
    ID           uint
    Name         string
    Email        *string
    Age          uint8
    Birthday     *time.Time
    MemberNumber sql.NullString
    ActivatedAt  sql.NullTime
    CreatedAt    time.Time
    UpdatedAt    time.Time
}
Then run migration to create it on the database.
However, I could not find any documentation on declaring a model for a table that already exists in the database. I suppose there is something like this:
type User struct {
    ID   uint
    Name string
    // This_struct(User).belongs_to_an_existing_table_named("a_table_name")
    // -- pseudocode to explain what I mean
}
Or does the struct have to share its name with the existing table? Can I change the name for simplicity in my code?
Simply implement the Tabler interface as specified in the docs. Like this:
func (User) TableName() string {
    return "a_table_name"
}
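A minimal, self-contained sketch of that pattern (plain Go, no database needed; the struct name LegacyUser is invented for illustration, and Tabler mirrors the interface GORM checks for):

```go
package main

import "fmt"

// LegacyUser is our own struct name; it does not have to match the table.
type LegacyUser struct {
	ID   uint
	Name string
}

// TableName implements GORM's Tabler interface, so GORM maps this
// struct onto the existing table instead of deriving "legacy_users"
// from the struct name.
func (LegacyUser) TableName() string {
	return "a_table_name"
}

// Tabler mirrors the interface GORM looks for.
type Tabler interface {
	TableName() string
}

func main() {
	var t Tabler = LegacyUser{}
	fmt.Println(t.TableName()) // prints a_table_name
}
```

So the struct can be named whatever reads best in your code; only the string returned by TableName has to match the existing table.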

Flatbuffers: How to allow multiple types for a single field

I'm writing a communication protocol schema for a list of parameters whose values can be of multiple types: uint64, float64, string or bool.
How can I set a table field to a union of scalar (uint64, float64, bool) and non-scalar (string) types?
I've already tried using a union of those types, but I end up with the following error when building:
$ schemas/foobar.fbs:28: 0: error: type referenced but not defined
(check namespace): uint64, originally at: schemas/request.fbs:5
Here's the schema in its current state:
namespace Foobar;
enum RequestCode : uint16 { Noop, Get, Set, BulkGet, BulkSet }
union ParameterValue { uint64, float64, bool, string }
table Parameter {
  name:string;
  value:ParameterValue;
  unit:string;
}

table Request {
  code:RequestCode = Noop;
  payload:[Parameter];
}

table Result {
  request:Request;
  success:bool = true;
  payload:[Parameter];
}
The end result I'm looking for is the Request and Result tables to contain a list of parameters, where a parameter contains a name and value, and optionally the units.
Thanks in advance!
Post-answer solution:
Here's what I came up with in the end, thanks to Aardappel.
namespace foobar;
enum RequestCode : uint16 { Noop, Get, Set, BulkGet, BulkSet }
union ValueType { UnsignedInteger, SignedInteger, RealNumber, Boolean, Text }
table UnsignedInteger {
  value:uint64 = 0;
}

table SignedInteger {
  value:int64 = 0;
}

table RealNumber {
  value:float64 = 0.0;
}

table Boolean {
  value:bool = false;
}

table Text {
  value:string (required);
}

table Parameter {
  name:string (required);
  valueType:ValueType;
  unit:string;
}

table Request {
  code:RequestCode = Noop;
  payload:[Parameter];
}

table Result {
  request:Request (required);
  success:bool = true;
  payload:[Parameter];
}
You currently can't put scalars directly in a union, so you'd have to wrap these in a table or a struct, where struct would likely be the most efficient, e.g.
struct UInt64 { u:uint64 }
union ParameterValue { UInt64, Float64, Bool, string }
This is because a union must be uniformly the same size, so it only allows types to which you can have an offset.
Generally though, FlatBuffers is a strongly typed system, and the schema you are creating here is undoing that by emulating dynamically typed data, since your data is essentially a list of (string, any type) pairs. You may be better off with a system designed for this particular use case, such as FlexBuffers (https://google.github.io/flatbuffers/flexbuffers.html, currently only C++) which explicitly has a map type that is all string -> any type pairs.
Of course, even better is to not store data so generically, but instead make a new schema for each type of request and response you have, and make parameter names into fields, rather than serialized data. This is by far the most efficient, and type safe.
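As a sketch of that last suggestion (all table and field names here are invented for illustration), a request-specific schema replaces the generic (name, value) pairs with strongly typed fields:

```
namespace Foobar;

// Hypothetical request-specific table: parameter names become
// typed fields instead of serialized (name, value) pairs.
table SetSamplingRateRequest {
  sensorName:string (required);
  rateHz:float64;
}
```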

Set a default value in constructor

OK, so I know this should be easy to do; I simply want to set a default value in the following:
uint8 public gasPriceLimit; // gas price limit

// Constructor
constructor(string _name, string _symbol, uint8 _decimals) public {
    name = _name;
    symbol = _symbol;
    decimals = _decimals;
    uint8 gasPriceLimit = 999;
}
However, doing this I get the following error when compiling:
Type int_const 999 is not implicitly convertible to expected type uint8.
I also tried setting in the declaration itself without luck.
Cheers
OK. The reason is that uint8 can only hold values from 0 to 255, so the constant 999 does not fit, which is exactly what the compiler error is saying. Note also that uint8 gasPriceLimit = 999; inside the constructor declares a new local variable rather than assigning to the state variable; a plain gasPriceLimit = 999; would assign to it. If I change to uint (an alias for uint256), where 999 fits, the following works fine:
uint public gasPriceLimit = 999;

Loop over SQL commands in a file

I have an SQL file that looks like this (clearly the real thing is a bit longer and actually does stuff :))
DECLARE @Mandatory int = 0
DECLARE @Fish int = 3
DECLARE @InitialPriceID int
if @Mandatory = 0
begin
    select @InitialPriceID = priceID from Fishes where FishID = @Fish
end
I have a file of 'Mandatory' and 'Fish' values
Mandatory,Fish
1,3
0,4
1,4
1,3
1,7
I need to write a program that will produce an SQL file (or files) for our DBO to run against the database, but I am not quite sure how to approach the problem...
Cheers
You should generally prefer set based solutions. I've no idea what the full solution would look like, but from the start you've given:
declare @Values table (Mandatory int, Fish int)

insert into @Values (Mandatory, Fish) values
(1,3),
(0,4),
(1,4),
(1,3),
(1,7)

;with Prices as (
    select
        v.Mandatory,
        v.Fish,
        CASE
            WHEN v.Mandatory = 0 THEN f.PriceID
            ELSE 55 /* Calculation for Mandatory = 1? */
        END as InitialPriceID
    from
        @Values v
        left join /* Or inner join? */
        Fishes f
            on v.Fish = f.FishID
) select * from Prices
You should aim to compute all of the results in one go, rather than trying to "loop through" each calculation. SQL works better this way.
At the risk of over-simplifying things, in C# or similar you could use a string-processing approach:
using System;
using System.IO;
using System.Linq;
using System.Text;

class Program
{
    static void Main(string[] args)
    {
        var sb = new StringBuilder();
        // Skip(1) skips the "Mandatory,Fish" header row.
        foreach (var line in File.ReadLines(@"c:\myfile.csv").Skip(1))
        {
            string[] values = line.Split(',');
            int mandatory = Int32.Parse(values[0]);
            int fish = Int32.Parse(values[1]);
            sb.AppendLine(new Foo(mandatory, fish).ToString());
        }
        File.WriteAllText(@"c:\myfile.sql", sb.ToString());
    }

    private sealed class Foo
    {
        public Foo(int mandatory, int fish)
        {
            this.Mandatory = mandatory;
            this.Fish = fish;
        }

        public int Mandatory { get; private set; }
        public int Fish { get; private set; }

        public override string ToString()
        {
            return String.Format(@"DECLARE @Mandatory int = {0}
DECLARE @Fish int = {1}
DECLARE @InitialPriceID int
if @Mandatory = 0
begin
select @InitialPriceID = priceID from Fishes where FishID = @Fish
end
", this.Mandatory, this.Fish);
        }
    }
}
There are many articles on how to read a text file through T-SQL; check "Stored Procedure to Open and Read a text file" on SO. And if you can change the format of your input files to XML, see "SQL SERVER – Simple Example of Reading XML File Using T-SQL".

NHibernate: Wrong column type: found float, expected double precision

I have a domain entity class with a property:
public virtual double? Result { get; set; }
The property is being mapped using the NHibernate 3.2 mapping-by-code stuff:
public class SampleResultMap : ClassMapping<SampleResult>
{
    public SampleResultMap()
    {
        Id(c => c.Id,
           map => map.Generator(Generators.Identity));
        Property(c => c.Result, map =>
        {
            map.NotNullable(false);
        });
        // More properties, etc.
    }
}
This works fine and the SQL Server 2008 R2 table is created properly with a data type of float.
However, the SchemaValidator.Validate call gives this error:
NHibernate.HibernateException was unhandled
Wrong column type in Foo.dbo.SampleResult for column Result.
Found: float, Expected DOUBLE PRECISION
Looking at the SQL that the call to SchemaExport.Create generates there is this definition for the table:
create table SampleResult (
    Id INT IDENTITY NOT NULL,
    DateEnteredUtc DATETIME not null,
    ElementId INT not null,
    Unit INT not null,
    ResultText NVARCHAR(50) null,
    [Result] DOUBLE PRECISION null,
    Detected BIT not null,
    Qualifier NVARCHAR(10) null,
    SampleId INT not null,
    Deleted BIT not null,
    primary key (Id)
)
From a quick reading of the NHibernate 3.2 sources it appears that the validator is comparing “DOUBLE PRECISION” to “float”.
Has anyone else seen this? I assume it is a bug but I haven't used the validator before so wanted to find out if I’m doing something wrong.
I had a similar issue with natively generated IDs on an SQLite DB, caused by the fact that SQLite only supports auto-increment on integer columns.
NHibernate had correctly created the ID column as integer, but during validation it expected int rather than integer. Because int just maps to integer on SQLite, I created a new dialect that uses integer instead of int, and it now works fine.
As DOUBLE PRECISION is the same as FLOAT(53) on SQL Server, it might be the same thing: the validator sees float instead of double precision. As a workaround you could try changing the dialect to use FLOAT(53) instead:
using System.Data;
using NHibernate.Dialect;

public class UpdatedMsSql2008Dialect : MsSql2008Dialect
{
    public UpdatedMsSql2008Dialect()
        : base()
    {
        RegisterColumnType(DbType.Double, "FLOAT(53)");
    }
}
The SQLServer dialect source seems to suggest it should work. Definitely looks like there is a bug with the validator though.