I am currently working on a side project that uses the Entity Framework Code First approach. I really like how the framework handles migrations and how you can use the .sql files in the Migrations folder, etc.

But I am still wrapping my head around the model/POCO classes I want to use, and the data structure is still changing at this stage of development. I have moved from 1-n relations to n-n, etc.

I sometimes stumbled on situations where foreign key constraints made it (almost) impossible to apply a migration. I also did not find out how to revert or cancel a pending migration when the database could not be updated.

Because I was still at the start of the project, I did not mind losing all the data and starting over with a clean/empty database. I have read several blogs about how to do this, but I wanted to blog my own situation. Perhaps it is useful for someone; perhaps it is just an online note to myself.

Here is how I did it:

  1. Delete the ‘__MigrationHistory’ table
  2. Delete all files in the ‘Migrations’ folder of your project except the ‘Configuration.cs’
  3. Delete all other tables in the DB, one by one (see the SQL sketch right after this list)
  4. In the Package Manager Console, select the correct project and run ‘Add-Migration <name>’
  5. Run ‘Update-Database’ (this recreates all tables and applies the seed data from the Configuration file, if you had any)
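
For steps 1 and 3, here is a minimal T-SQL sketch. The table names below are only examples for your own schema; because of the foreign key constraints you have to drop the referencing (child) tables before the tables they point to.

-- run this against your development database only
DROP TABLE dbo.__MigrationHistory

-- drop child tables before the parent tables they reference,
-- otherwise the foreign key constraints block the drop
DROP TABLE dbo.OrderLines   -- example child table
DROP TABLE dbo.Orders       -- example parent table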

A lot of blog posts are based on keeping your data. In that case you should obviously skip step 3, and comment out the body of the ‘Up’ method once you have added your migration and have the generated file open.

This is a slightly modified version of this blog post, which uses the brand new SQL Operations Studio 0.23.6.


Good luck!


The documentation for Azure SQL Database performance tuning on docs.microsoft.com (instead of the old MSDN location) is great, accurate and very detailed:

https://docs.microsoft.com/en-us/azure/sql-database/sql-database-query-performance

I was looking at a SQL db in Azure and noticed some spikes. Here is my graph:

[Screenshot: Azure portal resource utilization graph showing the spikes]

I could drill into the Azure portal to see what was causing this spike, and I recognized the SQL statement, so I knew for which product I had to update the SQL. However, this was not even necessary.
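
If you prefer T-SQL over the portal, you can also pull the top consumers straight from the plan cache. This is only a sketch (here ordered by CPU time; you could also order by total_elapsed_time or total_logical_reads):

SELECT TOP 5
       qs.total_worker_time / 1000 AS total_cpu_ms,
       qs.execution_count,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
           ((CASE qs.statement_end_offset
                 WHEN -1 THEN DATALENGTH(st.text)
                 ELSE qs.statement_end_offset END
             - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC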

Here are the client statistics after I copy-pasted the query causing the spikes into SQL Server Management Studio Express. Check out the “Total execution time”:

[Screenshot: SSMS client statistics before the fix]

My next step was to take a look at the execution plan of the query. There is a yellow exclamation mark on the Sort operator:

[Screenshot: execution plan with a warning on the Sort operator]

When you hover over it, you get this tooltip:

[Screenshot: Sort operator tooltip showing the spill to tempdb warning]

So the sort spilled to tempdb. I still had no clue how to fix this, so I reached out to Stack Exchange, and a user named T.H. gave me the solution: create two rather simple indexes:

CREATE NONCLUSTERED INDEX TEST ON STOCKDEBUGTRIGGERED (ChangeDate)

CREATE NONCLUSTERED INDEX TEST ON STOCKDEBUG(ProductID, StockOld, StockNew)

Here is the query plan after the two new indexes:

[Screenshot: execution plan after adding the two indexes]

No more yellow exclamation mark, and fewer steps. The client statistics also show that this is a lot faster:

[Screenshot: SSMS client statistics after the fix]

The average total execution time dropped from 4250 down to 50!
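
To double-check that the two new indexes are really the ones doing the work, you can look at the index usage statistics. A small sketch (it matches both indexes because they share the name TEST):

SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
  AND i.name = 'TEST'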

 

This is also backed by the dramatic drop in resource usage in the Azure Portal:

[Screenshot: Azure portal resource utilization graph after the fix]

So the lesson is: do not trust the Azure SQL Database performance advisor 100% :)

T.H. commented on Stack Exchange:

The automatic index advice is extremely limited, and often misleading, so can only be considered as a starting point.

 

I hope this info helps someone troubleshooting Azure SQL Database performance!

 

Good luck!


Having a store locator on my to-do list, I started months back by storing the latitude and longitude of the stores in a database. Back in 2012 I forked this repository, https://github.com/bjorn2404/jQuery-Store-Locator-Plugin, in order to have something like a store locator on our website. The downside of that store locator is that it is all client side and puts a copy of all your customer data on the internet; great data for your competition. So, to disclose as little as possible and still serve your valued customers, I decided to do the calculation on the backend, closest to the data. That would be on the SQL Server.

SQL Server has had spatial data types like geography and geometry since 2008. With this SQL statement you can get the distance between my location (lat 51.69917, lng 5.30417) and the locations in the stores table:
SELECT TOP 10 StoreID
      ,StoreName
      ,StoreLatitude
      ,StoreLongitude
      ,StoreCity
      ,ROUND(geography::Point(StoreLatitude, StoreLongitude, 4326).STDistance(geography::Point(51.69917, 5.30417, 4326)) / 1000, 1) AS km
FROM STORES
WHERE StoreLongitude < 150 AND StoreLongitude > -150 AND StoreLatitude < 90 AND StoreLatitude > -90
ORDER BY geography::Point(StoreLatitude, StoreLongitude, 4326).STDistance(geography::Point(51.69917, 5.30417, 4326))
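
Note that STDistance on geography points with SRID 4326 returns the distance in meters, hence the division by 1000 to get kilometers. A quick sanity check with two known points (rounded coordinates for Amsterdam and Rotterdam):

SELECT geography::Point(52.3676, 4.9041, 4326)
           .STDistance(geography::Point(51.9244, 4.4777, 4326)) / 1000 AS km
-- returns roughly 57, which matches the straight-line distance in km between the two cities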

The third parameter of the Point method is 4326, which is an SRID (spatial reference identifier):

And about the 4326 SRID: it is also called the WGS84 (http://en.wikipedia.org/wiki/World_Geodetic_System) system. It is the most common system to represent a point on the spherical (not flat) earth. It uses degree, minute, second notation, and its x and y coordinates are usually called latitude and longitude.

source http://stackoverflow.com/a/405256/169714

SRID (Spatial Reference System Identifier): https://en.wikipedia.org/wiki/SRID
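
If the stores table grows large, it can pay off to store the point once in a real geography column and put a spatial index on it, instead of constructing the points in every query. A sketch of that idea (the StoreGeo column and the index name are made up, and it assumes STORES has a clustered primary key, which a spatial index requires):

ALTER TABLE STORES ADD StoreGeo geography
GO
UPDATE STORES
SET StoreGeo = geography::Point(StoreLatitude, StoreLongitude, 4326)
WHERE StoreLatitude BETWEEN -90 AND 90 AND StoreLongitude BETWEEN -180 AND 180
GO
CREATE SPATIAL INDEX IX_STORES_StoreGeo ON STORES (StoreGeo)
GO

The distance query can then call StoreGeo.STDistance(...) directly instead of building the point for every row.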

This post is mostly a reference for me to look up how to calculate distances on Microsoft SQL Server. But maybe it helps others too.

Good luck!


If you add a new table and mark the ID column as the primary key, but forget to set the int or bigint column to auto-increment (IDENTITY), you cannot fix this afterwards in SQL Server Management Studio.


There are several ways to fix this; I ended up removing the primary key, dropping the column, and re-creating it.

Dropping the column in the designer also gave me the error about having to drop and re-create the whole table. So I had to do this with some SQL: no GUI today, just these two small SQL statements. I hope it helps; otherwise it is just an online backup for when I forget how I fixed this.

-- first drop the existing primary key constraint (the name below is an example, look it up on your table)
-- ALTER TABLE YOURTABLENAME DROP CONSTRAINT PK_YOURTABLENAME
ALTER TABLE YOURTABLENAME DROP COLUMN YOURCOLUMNNAME
ALTER TABLE YOURTABLENAME ADD YOURCOLUMNNAME bigint IDENTITY(1,1) PRIMARY KEY
-- note: the existing values in the column are lost; the identity restarts at 1
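
To verify whether a column really is an identity column, a quick check:

SELECT COLUMNPROPERTY(OBJECT_ID('YOURTABLENAME'), 'YOURCOLUMNNAME', 'IsIdentity') AS IsIdentity
-- returns 1 for an identity column, 0 otherwise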

Good luck!

 

PS: when you run SELECT @@VERSION you get to see which version of SQL Server you are using. I had:

Microsoft SQL Server 2008 (SP3) - 10.0.5520.0 (X64)
    Jul 11 2014 16:11:50
    Copyright (c) 1988-2008 Microsoft Corporation
    Standard Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1) (VM)
This blogpost http://sqlserverbuilds.blogspot.nl/ has a nice list of different versions.
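
SERVERPROPERTY gives you the same information in a more script-friendly form, for example:

SELECT SERVERPROPERTY('ProductVersion') AS version,
       SERVERPROPERTY('ProductLevel')   AS service_pack,
       SERVERPROPERTY('Edition')        AS edition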


Lazy (productive) developer that I am, I did not want to write a stored procedure. So I ended up with a number of SQL statements that had to be combined and executed only if the previous one succeeded.

However, there was a timeout during execution. It was strange, because when I copy-pasted the SQL statements into SSMS they ran perfectly fine and fast. My mistake was using a new SqlCommand object that was not part of the transaction, so it presumably sat waiting on the locks held by the open transaction until it timed out. I needed to re-use the same SqlCommand. Here is some working code:

using (SqlConnection con = new SqlConnection(ConfigurationManager.ConnectionStrings["myConString"].ToString()))
{
    con.Open();

    // Create ONE command and enlist it in the transaction;
    // every statement below re-uses this same command.
    SqlCommand cmd = con.CreateCommand();
    SqlTransaction transaction = con.BeginTransaction();

    cmd.Connection = con;
    cmd.Transaction = transaction;

    try
    {
        cmd.CommandText = "SELECT productcode FROM products WHERE productid = " + productid;
        string code = cmd.ExecuteScalar().ToString();

        cmd.CommandText = "SELECT ProductID, ProductWeight, ProductEuroPrice, OrderID FROM ORDERPICKINGPRODUCTS WHERE orderid = " + orderid + " AND productid = " + productid;
        DataTable dt = new DataTable();
        using (SqlDataReader sdr = cmd.ExecuteReader())
        {
            dt.Load(sdr);
        } // the reader must be closed before the next command runs on this connection

        cmd.CommandText = "DELETE FROM ORDERPICKINGPRODUCTS WHERE orderid = " + orderid + " AND productid = " + productid;
        cmd.ExecuteNonQuery();

        // only commit when every statement succeeded
        transaction.Commit();
    }
    catch (Exception e)
    {
        // any failure undoes all previous statements; log or rethrow as needed
        transaction.Rollback();
    }
}
I know that it is better to use parameters https://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlcommand.parameters%28v=vs.110%29.aspx

And the AddWithValue method… But this is just pseudo code and not in production. So don’t worry!

This is just a small sample to help people with the strange timeout you can encounter when you use a SqlCommand outside of the transaction.
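
If you run into such a timeout again and want to see what is blocking on the server side while the code hangs, a query like this (just a sketch) shows the sessions that are waiting on another session:

SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0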

 

Good luck!
