To keep kids entertained, people all over the world are putting a teddy bear in their window so that children can spot them during a walk.

I don’t know where it originated; it’s happening in Canada, the USA, New Zealand, the UK, and so on. It’s also happening in my hometown, so I decided to make an app for it.

People set up a Facebook group with a Google Form to submit a teddy bear and a Google Maps link that shows all the bears.

I thought that I had to get access to the Google Spreadsheet containing the data, but judging by the entry form and the map data, it would not hold much extra info anyway. From Google Maps you can download a KMZ file, which is a zipped KML (Keyhole Markup Language) file. It’s XML. Here is the full KML:

[image: the full KML file]

So there is a web link inside it that returns the live data. I used WebClient to pull that in, read it, and tried to turn the placemarks into pins.

My first step was to add the Xamarin.Forms.Maps NuGet package and SharpKml.Core.

Here is the full code:


private void AddMap()
{
	// Center the map on my hometown with a 10 km radius.
	var map = new Map(MapSpan.FromCenterAndRadius(new Position(51.697815, 5.303675), Distance.FromMeters(10000)));

	using (var client = new WebClient())
	{
		// Pull the live KMZ feed from the Google My Maps link and parse it with SharpKml.
		var kmz = KmzFile.Open(client.OpenRead("https://www.google.com/maps/d/u/0/kml?mid=1kedGv2twtsWmzgxRpZcu5hr-qpE77plL"));
		Kml kml = kmz.GetDefaultKmlFile().Root as Kml;

		if (kml != null)
		{
			// Flatten the KML tree and grab every placemark (needs System.Linq).
			foreach (Placemark placemark in kml.Flatten().OfType<Placemark>())
			{
				Console.WriteLine(placemark.Name);

				var pin = new Pin()
				{
					Address = placemark.Address,
					Label = placemark.Name,
					Type = PinType.Place
				};
				map.Pins.Add(pin);
			}
		}
	}

	this.Content = map;
}

But the pins won’t show up (of course), because I did not set a position. The real KML contains data like this:

[image: KML placemark detail]

So as you can see, there is no latitude/longitude for the placemarks… How does Google Maps make it work then? It seems that both Google Maps and Google Earth geocode the address to get the lat/lng, but that service unfortunately is not free. I tried to load the KML into Google Earth and export it again, but that also does not add the latitude and longitude. I also looked at KML support for the map control, so that I could just hand the KML to the map and have it sort things out, but that package was built against MonoAndroid 9 instead of netstandard2.0 and would probably not fix the geocoding issue anyway.

I planned to make an app with no central backend. But because of the geocoding, I would have to use a Web API or Azure Function to keep track of the “database” with all teddy bears and their corresponding lat/lng. Moving the geocoding from the phone (client) to the server would also reduce the number of geocoding requests.
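To give an idea of what that geocoding step involves, here is a minimal sketch using the Xamarin.Essentials Geocoding API (the helper is hypothetical and not part of the app; it assumes the Xamarin.Essentials package is referenced):

// Requires: System.Linq, System.Threading.Tasks, Xamarin.Essentials, Xamarin.Forms.Maps
static async Task<Position?> GeocodeAddressAsync(string address)
{
	if (string.IsNullOrWhiteSpace(address))
		return null;

	// Ask the platform geocoder to resolve the free-form address.
	var locations = await Geocoding.GetLocationsAsync(address);
	var location = locations?.FirstOrDefault();

	return location == null
		? (Position?)null
		: new Position(location.Latitude, location.Longitude);
}

Doing this per placemark on every phone is exactly the kind of request volume I would rather move to the server.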

But I have not found a good free geocoder yet.

nominatim.openstreetmap.org does not work if I feed it the KML data directly. I think that I will come back to this one…
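When I do come back to it, a per-address lookup against Nominatim’s public search endpoint might work better than feeding it the KML. Roughly something like this sketch (the helper, the User-Agent value and the Newtonsoft.Json parsing are assumptions; the usage policy also limits you to about one request per second, so this fits a one-off server-side import rather than the client):

// Requires: System, System.Globalization, System.Net.Http, System.Threading.Tasks, Newtonsoft.Json.Linq
static async Task<(double Lat, double Lng)?> GeocodeWithNominatimAsync(string address)
{
	using (var client = new HttpClient())
	{
		// Nominatim asks for an identifying User-Agent on every request.
		client.DefaultRequestHeaders.UserAgent.ParseAdd("TeddyBearMap/0.1");

		var url = "https://nominatim.openstreetmap.org/search?format=json&limit=1&q="
			+ Uri.EscapeDataString(address);
		var json = await client.GetStringAsync(url);

		// The response is a JSON array; "lat" and "lon" come back as strings.
		var results = JArray.Parse(json);
		if (results.Count == 0)
			return null;

		return (double.Parse((string)results[0]["lat"], CultureInfo.InvariantCulture),
			double.Parse((string)results[0]["lon"], CultureInfo.InvariantCulture));
	}
}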


Good luck making your own KML/KMZ reading app!


There are a lot of guides on how to install Pi-hole on a Pi. I just used Win32 Disk Imager to put Raspbian on the SD card and put an ssh.txt file on the boot partition to enable SSH. Then I just had to plug it into the Pi and wait for it to show up in the DHCP client list of my router, so that I knew which IP to SSH to.

I have a TP-Link Archer C7 router which has “DNS rebind protection”, so I could not change the DNS to the local IP address.

In order to fix it I had to go to the Archer web interface and follow these steps:

In DHCP –> DHCP Settings, set the primary DNS to the Pi’s local IP.

In DHCP –> Address Reservation, click “Add New” and paste the Pi’s MAC address (which you can find in DHCP –> DHCP Clients List) together with the local IP the Pi currently has. This makes sure that after the next reboot, the Pi/DNS server will keep the same LAN IP.


By the way, I had an issue with the FTL part of the Pi-hole installation.

I fixed it with: `sudo nano /etc/resolv.conf`

Add a line with: `nameserver 8.8.8.8`

That is the Google DNS, and it makes sure there is a known DNS server so that the installation can look up IPs and continue.

Save the .conf file and run `sudo bash /etc/.pihole/pihole -r`


That will repair the installation, and the FTL part will succeed now that there is a DNS entry pointing to the Google DNS in resolv.conf.


Good luck!

PS: please let me know if you know a way to block YouTube ads. Not a browser plugin, but a Pi-hole solution please.


When you have a nice .NET Core solution and want to see the code smells and technical debt, you can analyze it with SonarQube.


I started by browsing to Docker Hub and used a container:

docker pull sonarqube

docker run -d --name sonarqube -p 9000:9000 sonarqube

The default username is ‘admin’ and the default password is ‘admin’, so once it has started you can head over to http://localhost:9000 and log in. Configure your project there and copy the project key.

You can get the SonarLint extension for Visual Studio and Visual Studio Code and link it to the local SonarQube server.

You need a one-time installation of a global tool:

dotnet tool install --global dotnet-sonarscanner --version 4.3.1

And then:

dotnet sonarscanner begin /k:"project-key" 
dotnet build <path to solution.sln>
dotnet sonarscanner end 

Wait a minute after it finishes so that the SonarQube server has some time to process the results. Then check the dashboard again to see the smells, bugs and tech debt. This helps you verify whether you are still coding SOLID.


Happy coding!


As you may have noticed, you need Visual Studio Enterprise for Live Unit Testing, or JetBrains Rider, or some Visual Studio Code “hacks”. Here is a way to get code coverage for .NET Core with a global tool:

Daniel Palme has a global tool version of ReportGenerator. You install it once, either globally or into a local tools folder:

dotnet tool install -g dotnet-reportgenerator-globaltool

dotnet tool install dotnet-reportgenerator-globaltool --tool-path tools

You can then run it with `reportgenerator`, so after building I run:

dotnet test --filter FullyQualifiedName~UnitTests /p:CollectCoverage=true /p:CoverletOutputFormat=opencover /p:Exclude="[*Test*]" /p:ExcludeByAttribute="GeneratedCodeAttribute"
reportgenerator "-reports:**\coverage.opencover.xml" "-targetdir:C:\Temp\Reports\" "-reporttypes:HTML"
Start-Process -FilePath "C:\Temp\Reports\index.htm"

Of course, you can also put these three lines of PowerShell in a file in the root of your solution, go to the project properties and add it to the Build Events tab as a post-build event:

Powershell -File "$(SolutionDir)nameOfPowershellscript.ps1"

Good luck!


Recently the Raspberry Pi 4 was announced, but I am currently using my Raspberry Pi 3 and want to run RabbitMQ on it in Docker. So I used these two commands to get it to work and just wanted to share them:


sudo rm /etc/apt/sources.list.d/docker.list;

curl -sL get.docker.com | sed 's/9)/10)/' | sh

If you would like to use Docker as a non-root user you should add your user to the docker group:

sudo usermod -aG docker pi

To get RabbitMQ (which has an ARM container) on the Pi with a management web interface, run:

sudo docker run -d --hostname my-rabbit --name some-rabbit -p 15672:15672 -p 5672:5672 rabbitmq:3-management

Then get the IP of the Docker container with the following (though since you mapped the ports in the previous command, this step can be skipped):

sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' some-rabbit

Then you can launch a browser, go to http://thatipaddress:15672 and log in with ‘guest/guest’. If you did not look up the IP of the container, you can use the IP of the Pi instead, because you mapped the container ports when running it.
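If you also want to check that the broker accepts AMQP connections from .NET, a small smoke test with the RabbitMQ.Client NuGet package (5.x/6.x API) could look like this sketch. The IP address and queue name are assumptions, and note that the default guest/guest user may be restricted to local connections depending on the broker configuration:

using System;
using System.Text;
using RabbitMQ.Client;

class Program
{
	static void Main()
	{
		// Replace with the IP of your Pi (or of the container).
		var factory = new ConnectionFactory { HostName = "192.168.1.50" };

		using (var connection = factory.CreateConnection())
		using (var channel = connection.CreateModel())
		{
			// Declare a non-durable test queue and publish a single message to it.
			channel.QueueDeclare(queue: "smoke-test", durable: false, exclusive: false,
				autoDelete: false, arguments: null);

			var body = Encoding.UTF8.GetBytes("hello from the pi");
			channel.BasicPublish(exchange: "", routingKey: "smoke-test",
				basicProperties: null, body: body);

			Console.WriteLine("Message published; it should show up under Queues in the management UI.");
		}
	}
}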




Good luck!
