I am new to C# and am using ASP.NET Core 2.0 on my Mac. I have a simple piece of code that gets data from the database and outputs it to the browser. I am doing stress testing, simulating how the system handles 100 requests: the first request takes 2.5 seconds and the 100th request takes a whopping 13 seconds. I know it is my code, because the exact same query done in Golang against the same database takes 47 ms for the first request and 14 ms for the 100th. The SQL just returns JSON as a string from the database, which is Postgres. I am testing this now so that I don't get any surprises later on. The database is on my computer, so everything should return right away. I am using the Npgsql package, version 3.2.5, and wrote my code following this example: http://www.npgsql.org/doc/index.html.
public string getData()
{
    string results = "";
    string connection = "connection to database";

    using (var conn = new NpgsqlConnection(connection))
    {
        conn.Open();
        using (var cmd = new NpgsqlCommand(
            "select json_build_object('Locations', array_to_json(array_agg(t))) from " +
            "(SELECT latitudes,county,longitudes,statelong,thirtylatmin,thirtylatmax," +
            "thirtylonmin,thirtylonmax,city FROM zips where city='Chicago' ORDER BY city limit 5) t",
            conn))
        using (var reader = cmd.ExecuteReader())
            while (reader.Read())
                results = reader.GetString(0);
    }
    return results;
}
I also ran the test with everything made asynchronous, which improved the first request to 712 ms, but the 100th request took even longer: 17 seconds.
public async Task<string> B()
{
    string results = "";
    string connection = "my connection";

    using (var conn = new NpgsqlConnection(connection))
    {
        await conn.OpenAsync();
        using (var cmd = new NpgsqlCommand(
            "select json_build_object('Locations', array_to_json(array_agg(t))) from " +
            "(SELECT latitudes,county,longitudes,statelong,thirtylatmin,thirtylatmax," +
            "thirtylonmin,thirtylonmax,city FROM zips where city='Chicago' ORDER BY city limit 5) t",
            conn))
        using (var reader = await cmd.ExecuteReaderAsync())
            while (await reader.ReadAsync())
                results = reader.GetString(0);
    }
    return results;
}
The images of the test can be found here; essentially, the results are:
.NET Core 2.0 sync: 1st request = 2.5 seconds; 100th request = 13 seconds
.NET Core 2.0 async: 1st request = 717 milliseconds; 100th request = 17 seconds
Golang: 1st request = 47 milliseconds; 100th request = 14 milliseconds
Either I'm missing something with optimization, or the Npgsql library has some big issues. I don't know why there is such a big difference in request times between the Go version and the .NET Core version. I would like to use the .NET Core version in production, but this performance issue won't allow it. Any suggestions for making this faster? Again, I am using .NET Core 2.0 on a Mac with a Postgres database, and I have not changed any .NET Core settings. Also, I don't know if it is relevant, but I am serving on localhost:5002:
{
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:55556/",
      "sslPort": 0
    }
  },
  "profiles": {
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "Yisly": {
      "commandName": "Project",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      },
      "applicationUrl": "http://localhost:5002/"
    }
  }
}