Xiaomi MiBand3 and privacy… not much

MiBand is a wearable activity tracker produced by Xiaomi. The 3rd generation has a reasonable feature set and it is 2-3 times cheaper than its competitors. It’s a great entry-level band if you want to begin tracking your fitness levels.

I bought a Xiaomi MiBand3 because I was hoping to make sense of my erratic sleeping patterns. I was also eager to discover how much movement I was getting through the day. Additionally, I was hoping to connect with some faraway friends via the MiFit phone app.

“But what about privacy?” a nagging voice in my head kept asking. The data that a fitness tracker generates feels personal, and Xiaomi is a Chinese company. I don’t want the Chinese government to have information about me. It’s bad enough that Chinese citizens have no privacy. To make an informed purchase, I searched for articles about “Xiaomi privacy”. I found two articles (1 and 2) reviewing the InfoSec aspects of a couple of fitness trackers, but nothing that answered my question about Xiaomi. In the end, I decided to buy the band, but I felt uneasy every time the MiFit app “synchronized” with the band. Uneasy, but not concerned enough to look deeper into the matter.

Susana Sanz, from BalkonTactics, renewed my interest in privacy and InfoSec. After our talk, my unease changed into serious concern about the data generated by my shiny, new fitness tracker. The ethical/moral/philosophical aspect of privacy did not interest me; that’s something that others have covered in more detail (Glenn Greenwald and Edward Snowden come to mind).

I was interested to know what exactly Xiaomi knows about me. Making a Subject Access Request (SAR), a right recognized by the GDPR, is one way to go about it. For the sake of learning how SARs work, I submitted one to privacy@xiaomi.com. But this process would take a long time and I don’t like to wait. While waiting for an official response, why not use a “hacky”, DIY approach to get an answer?

Nowadays, almost all phone apps communicate with a server on the internet. It is possible to intercept that communication while it’s happening (see the technical method section at the bottom of the post). Thirty minutes later, my concern was confirmed: Xiaomi collects all the data that the MiBand generates. To be more specific, all the data that is shown in the MiFit app (e.g. sleep data, training data, heart rate, etc.) gets uploaded to the Xiaomi servers. There is no way to disable the upload in the configuration of the application. The band keeps a backlog of all the recorded data and everything gets uploaded to Xiaomi when you use the mobile app. If you don’t use the MiFit app, you’ll end up with a band that only knows how to count your steps and measure your heart rate.

The only good news here is that, at this moment, the geo-location data is not collected passively (outside training sessions). But there is no guarantee that it will stay like this in the future.

This is a lot to take in. In the end, I’m left with a feeling of disappointment that Xiaomi has unrestricted access to sensitive information about my lifestyle:

  • what would happen if someone infiltrated their systems? I’m sure there are ways in which this data could be exploited to my disadvantage.
  • what’s up with this shady user agreement? It’s not clear to me what Xiaomi and its partners do with the data. I understand that the moment it gets uploaded I “waive any and all ownership, legal and moral rights” to my data. But how does Xiaomi use it? Who are the third parties or Xiaomi affiliates that have access to it? Is it sold or monetized in any way? Lots of questions and no clear answers.

If you’ve been on the internet long enough, you’re familiar with the phrase “If you’re not paying for the product, you are the product”. Fun fact: in 2016, the average worth of a Facebook user was $3.73 per quarter. I’m not ok with that, but let’s leave it there for now. Thinking logically, the phrase should not stand once I start paying for the product. Right? Well, it seems that this is not the case with the Xiaomi MiBand.

I’m dissatisfied that, despite paying for the MiBand, I “waive any and all” rights over the data that I generate. In this new light, what seemed to be a good deal (the best cheap fitness tracker on the market) became a lousy deal after reading the fine print and doing a bit of research.

In the end, there are two big questions left in my mind about this way of collecting personal data:

  • is this a general practice of the fitness band industry or is Xiaomi an exception?
  • are other Xiaomi products collecting data in the same way?

If you own a fitness tracker or another Xiaomi product, can you do a bit of digging around and let me know? Or get in touch with me and we can do the research together. 😉

The technical method of finding out what MiBand3 data gets uploaded to Xiaomi’s servers was simple. Had I known it would be so simple, I wouldn’t have postponed it so much.

The setup required nothing more than my phone, my laptop and mitmproxy:

  • connect both my phone (with the Xiaomi MiFit app) and my laptop to the same WiFi network
  • install and run mitmproxy on my laptop
  • install the mitmproxy root certificate on my phone
  • on my phone, set the proxy server for the WiFi connection to point to my laptop
  • open the Xiaomi MiFit app and look at the requests going into the proxy running on my laptop

Out of all the requests made by the Xiaomi MiFit app, only one sends a lot of data:

POST https://api-mifit-de.huami.com/v1/data/band_data.json?r=50AB6198-7007-47BF-86AC-53F606CDD4F6&t=1565977799279

In the payload of this request is a key called data_json that contains all the recent band data (as the name of the endpoint suggests). I took a look at the data in the JSON object and saw all the data points that are shown on all the graphs (in the app), including all the geo-coordinates of my running sessions. To my surprise, the MiFit app didn’t seem to send information about my current location to Xiaomi. Yet, this is no guarantee that this information will remain on my phone in the future.
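
To give an idea of its shape, the payload looked roughly like the sketch below. This is a hypothetical reconstruction, not a verbatim capture; the field names are illustrative and the elided parts (…) are intentionally left out:

```
{
  "data_json": [
    {
      "date": "…",
      "summary": { "steps": …, "sleep": … },
      "heart_rate": [ … ],
      "gps_points": [ … ]
    }
  ]
}
```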

From a technical perspective, I enjoyed doing this little research. Using mitmproxy was straightforward and I recommend it if you want to see what communication goes on between the apps on your phone and the internet.

Database integration tests

Did you ever struggle while trying to write integration tests without mocking the third party? I know I did.

I’ve known about Martin Fowler since I was a junior developer. He is one of my favorite technology thought-leaders and I enjoy reading the thought-provoking articles that he publishes on his blog. In this post, I want to expand on his article about the test pyramid, in which he explains the difference between unit tests, integration tests and UI tests. I’ll expand on the part about integration tests.

I want to show you an easy way to run integration tests against a database. With this goal in mind, I’ve created this Java demo project. It illustrates the concept and is useful as a starting point for more complicated applications.

In the past, I would delay writing the integration tests. I dislike the idea of not having any integration tests. Fixing errors at runtime is not something that I want to do in Java. Meanwhile, I didn’t want to invest time in setting up a separate test infrastructure. And tests running against a mocked database are not really testing anything. The options seemed limited:

  • mocking the database interaction layer
  • connecting to an in-memory database
  • connecting to a local/remote test database

Regardless of the option I picked, I always had the feeling that the solution could be more elegant. My hesitation would turn into frustration as the application grew and the tests would become harder to maintain. Recently, I discovered a better option: connecting the tests to a database running in a Docker container.

In my last project, Vlad and Max showed me that using Docker containers simplifies running integration tests against a database. No mocking, no complicated infrastructure, no tinkering with configuration files. As long as you can run Docker on the build machine, you can run the tests.

This is the elegant solution that I was looking for:

  • install and configure Docker on the build servers and the developer machines that will run the tests
  • use docker-maven-plugin to build a Docker image of the database and start/stop a container based on that image
  • populate the database with the necessary data (use ‘/docker-entrypoint-initdb.d/’ or a database migration tool like Flyway)
  • finally, use maven-failsafe-plugin to run the integration tests
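
As a sketch, the docker-maven-plugin part of the pom.xml could look like the fragment below. I’m assuming the fabric8 flavor of the plugin here, and the image name, credentials and version are placeholders, not necessarily what the demo project uses:

```xml
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.31.0</version>
  <configuration>
    <images>
      <image>
        <name>mysql:5.7</name>
        <run>
          <env>
            <MYSQL_ROOT_PASSWORD>test</MYSQL_ROOT_PASSWORD>
            <MYSQL_DATABASE>testdb</MYSQL_DATABASE>
          </env>
          <ports>
            <port>3306:3306</port>
          </ports>
          <wait>
            <!-- wait until MySQL reports it is ready, up to 60 seconds -->
            <log>ready for connections</log>
            <time>60000</time>
          </wait>
        </run>
      </image>
    </images>
  </configuration>
  <executions>
    <!-- start the container before, and stop it after, the integration tests -->
    <execution>
      <id>start</id>
      <phase>pre-integration-test</phase>
      <goals><goal>start</goal></goals>
    </execution>
    <execution>
      <id>stop</id>
      <phase>post-integration-test</phase>
      <goals><goal>stop</goal></goals>
    </execution>
  </executions>
</plugin>
```

With this in place, maven-failsafe-plugin picks up the `*IT` test classes during the integration-test phase, while the container is running.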

I like this approach because the integration tests are portable and easy to debug. If the tests run on my laptop, they will also run on the build machine. If the tests fail on the build machine, the problem will also appear on my laptop.

I also like that the tests are repeatable: the plugin builds the Docker image from scratch before the tests start and removes it after the tests finish.

Finally, I like that this approach is transferable to other types of integration tests. It’s easy to replace the database container with a dockerized REST API (in a move towards contract testing) or with a message queue container.

The demo project is more than a typical “hello world” application. The integration test starts a Docker MySQL container and makes a simple SQL query. You will notice that the docker-maven-plugin configuration is more complicated than necessary for such a simple test. The reason: the code samples that I found online seemed too trivial. I wanted a “bootstrap” project that I could reuse in real-life projects without the hassle of gluing everything together from scratch.

This approach to writing integration tests has opened my eyes to the possibilities that exist today in the QA automation domain. The tools built on top of Docker seem varied enough to accommodate all test types. The number of plausible excuses for not having a proper CI pipeline is getting too low…

Setting up a “fresh” MacBook

Only for developers that use macOS…

Setting up a “fresh” laptop for development is always painful, be it Windows, macOS or Linux.

It’s the second time in the last year that I’ve done a fresh install of macOS on my MacBook. I’m tired of installing all the apps that I need one by one. I’m tired of doing all the small configurations (for macOS and for the apps). Most of the time I forget some of them and there are a couple of settings for which I have to search online.

I start off by making a list of the settings and applications that I use and then I discover that most of these things can be automated via the command line. I’m sure that someone has done this before me, so I start searching for scripts that automate the installation of a development environment on a MacBook. I hit the jackpot.

Using a couple of GitHub repositories as inspiration, I create a couple of simple scripts that should install almost everything that’s required for development on a MacBook. The scripts and the instructions are available here: https://github.com/treaz/mac-dev-setup/

I’m curious how much it will hurt the next time that I have to set up a development MacBook.

Simple (and fun) 360 feedback process

In the last year, I’ve been testing a new feedback meeting style that is surprisingly valuable. The meeting is intense, but it energizes me and gives me a team-feeling that will last for days. Additionally, I get an idea of what I need to tackle at work in the next months. I’m happy with these outcomes so I’ve written down the guidelines that I use during the meetings. Maybe you can use them at work too.

This style of 360 feedback is the simplest, most fun and most revealing version of feedback that I know of. The credit for coming up with it goes to one of my previous scrum masters: Caroline Fidelaire.

I’m sharing this post and these slides because I wasn’t able to find them anywhere online while I was preparing for a recent 360 feedback session. I found plenty of articles about it, but nothing straight to the point.

The intention of the session is to gather as much feedback from your colleagues as possible in the easiest way possible. It’s like going out for beers with your team, but different ;).

I hope that the slides are self-explanatory and that they will serve as a good baseline when you do your own 360 feedback sessions. Good luck and have fun!

The Phoenix Project — book review


I read The Phoenix Project about two years ago. It is one of the most insightful books about running an IT organization that I have encountered so far. In this post, I’ll try to give you an idea of how awesome it is, without giving too many spoilers.

It was 22:30 when I opened The Phoenix Project for the first time. I stopped reading at 1:00. On the second and third night, the story repeated. Luckily, the weekend came, and I managed to get back lost sleep.

The story is exciting: The main character gets a promotion from IT manager to VP of IT Operations in a big company. He’s not told that the IT organization is crippled and that his job is to get it healthy again. In his new role, he encounters various IT operations related obstacles: severity one incidents, weekend-long deployments and impossible business/security requirements.

Various IT problems and their solutions appear in the first part of the book. Further on, things start to stabilize for the organization, and you get the reward of a happy ending. Nonetheless, I was surprised by the unrealistic ability of the IT organization to go through an organizational culture change so smoothly (a good reminder that this is a work of fiction).

The idea that impressed me most was the comparison of the IT organization with a factory. Looking back at my development experience, I observe that there are a lot more similarities between a factory worker and a software engineer than I’m comfortable admitting.

The Phoenix Project introduces various concepts that are in current use in IT.

The main ideas and concepts are summarized again in the last pages of the book. Along with them, you will also find the resources (1, 2) that inspired the authors when they were building up the plot. I recommend that you read at least this part of the book.

People working in tech will appreciate the quick intro into DevOps and IT management. I doubt that non-techies will find the book interesting.

Originally published here.

JCrete 2018 was amazing

I know it’s after the fact, but I want to share my experience at JCrete 2018. I encourage you to join the invite lottery when it opens up in December.

A disclaimer: JCrete overwhelmed me and I am not able to do it justice in this post. The participants were incredibly knowledgeable and I felt humbled many times during the sessions. Initially, I sat through sessions just absorbing new information. Slowly, questions started popping up in my mind, but I was blocked by the fear that I wouldn’t have anything interesting to say. Eventually, my curiosity took over and I had some good chats with some of the big guys (Robert Scholte, Cliff Click, Heinz Kabutz, Ivan Krylov and Chris Newland). It turns out they were approachable and very inspiring.

JCrete lasted for five days and had about four sessions and about two leisure activities per day. It was common for the session discussions to continue during the downtime. If you’re not familiar with the “unconference” concept, I have a friend who was there and explains it well.

Monday’s highlight — Challenges of AOT

I didn’t know anything about AOT compilation, so I went. During the session I realized that the Java ecosystem is vast and the technology behind it is, to say the least, sophisticated. This session made it clear that some smart computer scientists are working on the JVM and the Java language.

I wanted to learn more about this theme and Ivan Krylov recommended this video about JIT.

Tuesday’s highlight — Java mentors

Some time ago I realized the value of having mentors. In this session, we discovered that the mentor has expectations of the mentee: learn, show interest, develop soft skills and act on the previous points. But the mentee also has expectations: getting code reviews, being exposed to new tools and processes. Another discovery is that finding a mentor is not that hard: just reach out and show your dedication.

Wednesday’s highlight — GDPR

There were three people in the room that had implemented GDPR. The session was focused on the technical implications of applying the law. Basically, it turned out to be a crash course on the subject. The basics are:

  • GDPR applies to you if you handle the personal data of individuals in the EU (e.g. customers, employees).
  • categories like sexual orientation, religion, ethnicity are also considered personal data.
  • the scope of the GDPR responsibility is as broad as possible: you’re even responsible for 3rd parties that process your data.
  • everything needs to be accounted for: clear documentation of data storage, data handling procedures, on-demand/automatic data deletion procedures.
  • opt-ins need to be clear and explicit.
  • everything needs to be audited every year.
  • you need to have a point of contact in one of the EU member states.

Thursday’s highlight — Communication for introverts

I was surprised at the number of people that joined this session. And, judging by the number of people engaged in the conversation, it seems this is a hot topic. We shared useful tips & tricks for dealing with unexpected work situations. These are just a few:

  • if you get angry, go out and do something physical to consume the anger.
  • start labeling people as green (they have a significant positive impact on your life), yellow (so and so) and red (they hurt you in some way). Get rid of the reds (e.g. switch jobs, end friendships). Be strict about it.
  • it’s the manager’s responsibility to solve many of the issues that appear in the workplace. You don’t need to take it upon yourself to fix them.

Some of the ideas from that session are also in this talk.

Friday’s highlight — Contributing to maven

On the Hackday I went to the session led by Robert Scholte. He introduced us to contributing to open source by fixing Maven defects. It turns out it’s not as hard as it seems:

  • start small: pick a plugin that you’re interested in, but don’t go for the big ones (e.g. compiler, surefire).
  • open the project page and locate the “Issue Management” page and then open up the Jira board for that plugin.
  • pick a simple bug. That’s it.
  • bonus: Robert added a label recently for easy bugs (i.e. up-for-grabs).

JCrete 2018 was marvelous and I hope to go there again in the following years. But I’ll have to join the lottery in December, just like the rest of the mortals 🙂

Maven plugins that I like

I’ve been through a couple of projects so far and I’m noticing that there are a couple of Maven plugins that are useful, but not that famous. I’ll list them below, together with a short explanation of why I find them useful.

Build Number Maven Plugin (org.codehaus.mojo)

Sometimes you want to expose the current build version of your application without necessarily updating the artifact version (when you’re iterating fast and using *-SNAPSHOT). I didn’t get to use this one, but I can imagine that it would be good to have in a /info endpoint.
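
As a minimal sketch of that idea: the plugin can write the build number into a filtered properties file, and a tiny helper can expose it. The file name `build.properties` and the key `build.number` are my assumptions (wired up via Maven resource filtering), not plugin defaults:

```java
import java.io.InputStream;
import java.util.Properties;

// Hypothetical helper behind a /info endpoint: reads the build number
// that buildnumber-maven-plugin can inject into build.properties
// through resource filtering.
public class BuildInfo {

    static String buildNumber() {
        Properties props = new Properties();
        try (InputStream in = BuildInfo.class.getResourceAsStream("/build.properties")) {
            if (in != null) {
                props.load(in);
            }
        } catch (Exception e) {
            // ignore and fall back to the default below
        }
        // default to "unknown" when the file or key is missing
        return props.getProperty("build.number", "unknown");
    }

    public static void main(String[] args) {
        System.out.println("build=" + buildNumber());
    }
}
```

When the properties file is absent (e.g. in a plain unit test run), the helper degrades gracefully to "unknown" instead of throwing.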

Apache Maven Shade Plugin

Sometimes you need to package all your dependencies into a single jar file. If you’re using Spring Boot, you’ll have an easier life with the spring-boot-maven-plugin. Most of the time I use the Shade plugin for small projects, where I want to keep everything compact.
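
A typical configuration sketch (the main class name is a placeholder; binding the `shade` goal to the `package` phase and setting the Main-Class via the manifest transformer is the common setup):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <transformers>
          <!-- make the uber-jar runnable with java -jar -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
            <mainClass>com.example.Main</mainClass>
          </transformer>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
```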

License Maven Plugin (org.codehaus.mojo)

Sometimes you need to add a license header to all the files of your project, or you need to package with your application a file with all the licenses of the 3rd party libraries that you’re using. I wish I had known about it three years ago when I was updating THIRD-PARTY.txt files manually.

Versions Maven Plugin

Sometimes you want to update the libraries that you’re using to their latest versions. This plugin makes managing that update a lot easier, and you have good control over how versions advance. It makes your life easier, assuming you have enough tests to make sure that the update didn’t break your application. I just discovered this plugin and I like that it makes the library update chore a breeze.

Apache Maven Enforcer Plugin

Sometimes you need to make it clear which OS, Java, etc. versions the application can be built with (because of incompatibilities). I like that the set of ready-made rules is extensive.
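
A sketch of such a setup, using three of the built-in rules; the version ranges below are examples, not recommendations:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <executions>
    <execution>
      <id>enforce-environment</id>
      <goals><goal>enforce</goal></goals>
      <configuration>
        <rules>
          <!-- fail the build early on a known-incompatible environment -->
          <requireJavaVersion>
            <version>[1.8,12)</version>
          </requireJavaVersion>
          <requireMavenVersion>
            <version>[3.5,)</version>
          </requireMavenVersion>
          <requireOS>
            <family>unix</family>
          </requireOS>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```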

Apache Maven Checkstyle Plugin

Sometimes you need to enforce the code style that the team has agreed on. A bonus is that it helps new team members pick up the code standards immediately. I have a love-hate relationship with this plugin, but overall I’m happy to see the same kind of code everywhere.

OWASP Dependency-Check Plugin

You always want to have an application that is as secure as possible. This plugin allows you to automatically check for the latest reported vulnerabilities. I had known about the OWASP Top Ten for a couple of years, but I only learned about this plugin two months ago. I’ll never stop using it.

One last thing: I’m new at writing posts. If you have any suggestions to make this post more clear, please write a comment.

Java Unconferences/Open-spaces

This year I attended CITCON and then I searched for a similar Java unconference/open-space. I got super lucky to be accepted at JCrete, but that’s another story. The reality is that unconferences are mind-blowing, but a bit hard to find if you don’t know what to look for.

These are the unconferences that I currently know about:

Join one… it’s going to be worth it.


Testing JPQL queries straight from Intellij

In my current project, most of the queries are written in JPQL (Java Persistence Query Language). As with any xxx-QL that eventually gets translated to SQL, it’s cumbersome to translate the xxx-QL to SQL and back. This translation is generally needed when you’re creating a new query or trying to debug an existing one. It would be great to be able to send JPQL queries directly to the DB.

One way to do this is to configure the JPA console in IntelliJ IDEA. Note that this feature is only available in the paid Ultimate edition.

For those that are in a rush, this is the minimal configuration needed to get the JPA console going. To keep things simple, let’s assume that you have a single module project, called test-jpa:

  1. Add a new data source to the project (View | Tool Windows | Database). This data source should point to the same DB that your entities use.
  2. Add “JavaEE Persistence” framework support to test-jpa (right click module | Add framework support…). Click OK
  3. Open the Persistence Window (View | Tool Windows | Persistence)
  4. In this window, you will assign a data source to test-jpa (right click module | Assign data sources)
  5. In the Assign Data Sources window, you will see a line with the value “Entities” which points to an empty Data Source field. Click on this field and select the data source from step 1. Click OK.
  6. In the Persistence Window, expand the module and right click on Entities | Console. You have a choice between JPA and Hibernate Console.
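
Once a console is open, you can type JPQL against the live schema and run it directly. A hypothetical example, assuming a Person entity that is not part of the setup above:

```
select p from Person p
where p.lastName like :name
order by p.id desc
```

The console runs the query through your JPA provider, so the SQL you see should match what your application would execute at runtime.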

Some cool features that both Consoles support:

  • Navigating to the declaration of a class or field
  • Auto-completion
  • Parameterized queries

Reference: https://www.jetbrains.com/help/idea/using-jpa-console.html


Inspecting in-mem HSQLDB

Sometimes, for the automated testing of your Java application, you need to configure a DB connection. Most of the time the decision is to go for an in-memory database, and HSQLDB is a prime candidate.

And some other times your tests will fail and it would be great to see the state of the DB just before the failure. I already knew about the option of running HSQLDB as a server, but a colleague showed me a simpler way, with less configuration.

Simply add the following lines at the beginning of the test:

// prevent a java.awt.HeadlessException when the Swing window opens
System.setProperty("java.awt.headless", "false");
// launch a simple DB-inspector window on a separate thread
org.hsqldb.util.DatabaseManagerSwing.main(
        new String[]{"--url", "jdbc:hsqldb:mem:testdb", "--user", "sa", "--password", ""});

Normally, you’ll put a breakpoint in your test right before the failing assertions. If the DB-inspector window freezes, it’s because the breakpoint is configured to suspend all threads. You will need to configure your IDE not to suspend the DB-inspector thread (in IntelliJ IDEA, right click on the breakpoint and you’ll get the menu that allows you to change this).

