Today, I set up a WordPress instance on Amazon Lightsail. It’s a nifty little service that allows you to very easily launch and manage a virtual private server on AWS. You can find more information about Lightsail here; helpfully, the same article also guides you through launching a WordPress instance.
Lightsail’s WordPress instance comes with automatically-generated dummy (self-signed) SSL/TLS certificates. That means that when I try to access my website using HTTPS, I get a certificate warning. Not great.
Luckily, there’s a great complementary service called Let’s Encrypt which can help solve this issue. Let’s Encrypt is a free, automated, and open certificate authority. We’ll use it to generate valid certificates for our new WordPress instance.
Follow these instructions:
- Get your WordPress instance running on Lightsail.
- Forward your domain to the instance’s public IP. For example, for the domain example.com this usually means an A DNS record for example.com pointing to the IP, and a CNAME DNS record for www.example.com pointing to example.com.
- Verify that your website is accessible via HTTP and HTTPS. You’ll get a warning about the HTTPS certificate.
- SSH into your instance.
- Create a temporary directory:
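The command itself didn’t survive in this post; any scratch directory will do, for example:

```shell
# Create a working directory for the certbot download (the path is arbitrary)
mkdir -p /tmp/certbot
cd /tmp/certbot
```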
- Install certbot as explained here:
wget https://dl.eff.org/certbot-auto
chmod a+x certbot-auto
- Create a .well-known directory in the WordPress htdocs directory:
- Create a .htaccess file in that directory:
- Add the following contents to the .htaccess file, to make the .well-known directory accessible:
# Override overly protective .htaccess in webroot
You can edit the file using nano or vi, e.g.:
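These steps can also be done in one go from the shell. Note that the full .htaccess contents were lost from this post, so everything after the comment line is an assumption; a scratch path stands in for the real webroot here:

```shell
# On the Lightsail instance, set WEBROOT=/home/bitnami/apps/wordpress/htdocs
WEBROOT=/tmp/wordpress-htdocs
mkdir -p "$WEBROOT/.well-known"
cat > "$WEBROOT/.well-known/.htaccess" <<'EOF'
# Override overly protective .htaccess in webroot
Satisfy Any
EOF
```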
- Run certbot. Make sure you configure everything as expected and input a real email address when required:
./certbot-auto certonly --webroot -w /home/bitnami/apps/wordpress/htdocs/ -d example.com -d www.example.com
Of course, change example.com to the name of your domain.
- If everything executes as expected, you’ll see a message congratulating you on successfully acquiring the certificates you requested.
- Next, edit the Apache configuration file, as explained here:
sudo vi /opt/bitnami/apache2/conf/bitnami/bitnami.conf
Comment out (by adding a # in the beginning of the line) the following lines:
Add the following lines below:
# Let's Encrypt
Of course, change example.com to the name of your domain.
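The exact directives were lost from this post; on a stock Bitnami image the change typically looks like the following. The server.crt/server.key pair is the bundled self-signed certificate, and the /etc/letsencrypt/live paths are Let’s Encrypt’s defaults – both are assumptions here, so check against your own configuration:

```apache
# Comment out the bundled self-signed certificate:
#SSLCertificateFile "/opt/bitnami/apache2/conf/server.crt"
#SSLCertificateKeyFile "/opt/bitnami/apache2/conf/server.key"

# Let's Encrypt
SSLCertificateFile "/etc/letsencrypt/live/example.com/fullchain.pem"
SSLCertificateKeyFile "/etc/letsencrypt/live/example.com/privkey.pem"
```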
- Finally, restart Apache:
sudo /opt/bitnami/ctlscript.sh restart apache
You should see the following output:
/opt/bitnami/apache2/scripts/ctl.sh : httpd stopped
/opt/bitnami/apache2/scripts/ctl.sh : httpd started at port 80
- Done! You can check whether the correct certificate appears when you access your website at https://www.example.com
Note that Let’s Encrypt certificates expire after 90 days. As explained here, you can either manually renew the certificates every 90 or so days (by re-running certbot and restarting Apache, as above), or add a cron job that automatically does this for you.
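For the automated option, a crontab entry along these lines works (the certbot-auto path is an assumption – use wherever you placed it, and run this from root’s crontab so no sudo is needed):

```
0 3 1 * * /tmp/certbot/certbot-auto renew --quiet && /opt/bitnami/ctlscript.sh restart apache
```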
I am happy to announce that this January I’ve co-founded OptimalQ with my two amazing co-founders, Yechiel Levi and Yadin Haut.
What do we do in OptimalQ? As our website says:
OptimalQ’s proprietary technology harnesses a combination of big data statistical models and real time network information to intelligently look ahead at a set of mobile numbers and, without making a call, assess the physical and mental availability of each lead.
Our availability insights result in more people answering calls when they actually have the time to talk – meaning calls will be longer and likely more productive.
It has been an amazing ride so far, especially from a technical standpoint – I’ve had the time to finally design my “dream stack” (at least for an agile, time-to-market based business) and implement most of it.
We’re using Python as our technology of choice, with DynamoDB, Redis, and MySQL as our main data stores. Sensu and Prometheus help us with monitoring and alerting. Logentries is our current logging solution, but ELK is in our future. All of our stack is based on microservices, using DNS-based service discovery and SDN. We aren’t using containers and automatic orchestration as of yet, but that is coming very soon. We’re currently AWS-based, but most of our system is completely open-source and vendor-agnostic, which is great, as we’re not bound to any one cloud (and might move in the near future).
It sure has been fun; let’s hope it stays this way. :)
As part of a hackathon we had at EverythingMe, I developed and am now releasing the initial version of raml, a RAML 0.8 parser implemented in Go.
RAML is a YAML-based language that describes RESTful APIs. Together with the YAML specification, this specification provides all the information necessary to describe RESTful APIs; to create API client-code and API server-code generators; and to create API user documentation from RAML API definitions.
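For reference, a minimal RAML 0.8 API definition looks something like this:

```yaml
#%RAML 0.8
title: Hello API
version: v1
baseUri: http://api.example.com/{version}
/greeting:
  get:
    description: Returns a greeting.
    responses:
      200:
```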
The raml package enables Go programs to parse RAML files and validate RAML API definitions.
You can find the project here: github.com/go-raml/raml
Update, 2016-05-06: since I am currently quite busy running OptimalQ, and as this project is in use by several other projects and has several active forks, I’m looking for someone to take it over and make it useful once again. Message me if you’re interested.
Dozens of developers from all over the country attended the first Hubanana hackathon, held last weekend in Raanana, Israel, and running for around 24 hours. The focus this time was iBeacon, a technology that uses Bluetooth Low Energy proximity sensing to transmit a universally unique identifier that can be picked up by any compatible device, which can then be used to determine the device’s physical location or to trigger a location-based action, among other possibilities.
It was a fun hackathon, and my team won first prize, which was an added bonus.
My team’s product for this hackathon was called BeaconTask. The idea was simple: leave beacons around the house, in specific “task stations”. When a family member arrives at a station (kitchen, backyard, etc.), he can receive a task worth points: take out the trash, wash the dishes, and so on. He then accomplishes the task and photographs it, and the “Manager” (Mom/Dad/roommate/boss/etc.) verifies that it has been completed. A verified task awards the person who performed it points, which can then be exchanged for various prizes or rewards (e.g. allowance for kids, days off at an office).
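The flow can be sketched roughly as follows (all names here are hypothetical – this is an illustration, not the hackathon code):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    name: str
    station: str                       # the beacon's "task station", e.g. "kitchen"
    points: int
    completed_by: Optional[str] = None
    verified: bool = False

@dataclass
class Member:
    name: str
    points: int = 0

def complete(task: Task, member: Member) -> None:
    """Member photographs the finished task and submits it."""
    task.completed_by = member.name

def verify(task: Task, member: Member) -> None:
    """The Manager verifies the task; the member earns its points."""
    if task.completed_by == member.name and not task.verified:
        task.verified = True
        member.points += task.points
```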
During the hackathon, I improved python-firebase, a Python wrapper for the Firebase REST API.
I mostly worked on synchronizing the various team members, and also developed the back-end and data-model. All of the source code for our team’s project can be found here:
Articles (in Hebrew) regarding this hackathon can be found here and here.
I have recently discovered a bug in Python (both in the 2.x and 3.x families) and offered a patch to solve the issue.
When imap() or imap_unordered() is called with the iterable parameter set to a generator function, and that generator raises an exception, the _task_handler thread (running the method _handle_tasks) dies immediately, without causing the other threads to stop and without reporting the exception to the main thread (the one that called imap()).
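A minimal way to trigger this code path (on Pythons with the patch applied, the exception propagates to the main thread as shown; on affected versions the call would instead hang or fail silently):

```python
from multiprocessing import Pool

def square(x):
    return x * x

def numbers():
    # A generator used as the iterable parameter of imap()
    yield 1
    yield 2
    raise ValueError("boom")  # raised while the pool's task handler iterates

if __name__ == "__main__":
    with Pool(2) as pool:
        try:
            print(list(pool.imap(square, numbers())))
        except ValueError as e:
            print("caught in main thread:", e)
```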
I saw this issue in Python 2.7.8, 2.7.9 and 3.4.2. I didn’t check other versions, but I assume this bug exists in all Python versions since 2.6.
I reported this bug here and attached examples that reproduce this issue, as well as patches for both Python 2.7 and Python 3.4.
The patches I attached do 2 things:
- A deadlock is prevented, wherein the main thread waits forever for the Pool thread(s) to finish their execution, while they wait for termination instructions from the _task_handler thread, which has died. Instead, the exception is caught and handled, and the pool execution is terminated.
- The exception that was raised is caught, passed to the main thread, and re-raised in the context of the main thread – hence the user can catch and handle it, or – at the very least – be aware of the issue.
Now I’m waiting for a review.
Update, 2015-03-06: the patch was reviewed, tests were added, and it was merged into all maintained Python branches.
As part of my work on go-raml, I needed some additional capabilities from go-yaml, so I forked it and released my own version (until it’s merged upstream, if at all, since the main developer of go-yaml and I see things a bit differently). Here are the details:
* Added new regexp flag: Unmarshal all encountered YAML values with keys
that match the regular expression into the tagged field of a struct,
which must be a map or a slice of a type that the YAML value should
be unmarshaled into. [Unmarshaling only]
* Now dies in case of a badly formatted YAML tag in a struct field
* When a type implementing UnmarshalYAML called the unmarshaler func()
  to unmarshal into a specific type, which failed, and then called the
  func() again with a different output value, which succeeded, the YAML
  unmarshaling process still failed. The issue was a d.terrs == nil
  check instead of len(d.terrs) == 0
* Lots of new tests for the regexp flag - regexp unmarshaling into maps,
slices, regexp priority etc.
Here’s the fork: github.com/advance512/yaml
As per antirez’s feature request here, I implemented the following feature:
Added a count parameter to SPOP:
- spopCommand() now runs spopWithCountCommand() in case the count param is found.
- Added intsetRandomMembers() to Intset: Copies N random members from the set into inputted ‘values’ array. Uses either the Knuth or Floyd sample algos depending on ratio count/size.
- Added setTypeRandomElements() to the SET type: returns a number of random elements from a non-empty set. This is a version of setTypeRandomElement() modified to return multiple entries, using dictGetRandomKeys() and intsetRandomMembers().
- Added tests for SPOP with count: unit/type/set, unit/scripting, integration/aof
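For reference, the Floyd sampling algorithm mentioned above can be sketched in Python as follows (this illustrates the algorithm itself, not the Redis C code):

```python
import random

def floyd_sample(population, k):
    """Floyd's algorithm: pick k distinct items from a sequence using
    exactly k random draws and O(k) extra space - attractive when k is
    small relative to the set size."""
    n = len(population)
    chosen = set()
    result = []
    for j in range(n - k, n):
        t = random.randrange(0, j + 1)
        # If index t was already taken, j itself is guaranteed fresh.
        pick = t if t not in chosen else j
        chosen.add(pick)
        result.append(population[pick])
    return result
```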
More details can be found here.
Update, 2014-12-18: merged into all Redis branches.
Update, 2016-05-06: officially a part of Redis 3.0.2, with various parts rewritten by antirez for better performance. More info here: Redis 3.2.0 is out!
I work as a developer at Metacafe. We use memcached as our caching system, and have been using it for a few years now with great results. A while ago, we thought of a neat little concept that made working with memcached much more convenient for us. We patched the memcached code in-house, and have been using this patched code for a while now. We are now offering the concept (and the source code) to the project.
The idea is as follows: currently, whenever an item that has expired is requested from a memcached server, the server immediately unlinks (deletes) the item internally and returns an empty (or null) item to the client. We propose, instead, returning an empty item to the client and extending the expiration time of the item by X seconds (we use 60 seconds), thus returning the expired item to all clients who ask for it in the next X seconds. This behavior then repeats every X seconds. It should be controlled by a command-line argument and be off by default.
The benefit is that the client which receives the empty item can refresh it however it deems fit. There is no race condition and no database “stampede”: the client can access the database and create a new item, storing it on the server, knowing it is not racing another client. (Of course, if the server has just been launched or if the item is brand new, there might be a race – but this idea is not an attempt to fix that separate issue.)
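A toy sketch of the proposed behavior (in Python, for illustration – the actual patch is C inside memcached):

```python
import time

GRACE = 60  # the "X seconds" extension window

class StaleServeCache:
    """On a get() of an expired item, report a miss to that one client
    and extend the item's expiry by GRACE seconds, so only the first
    client refreshes it while everyone else keeps getting the stale value."""

    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def set(self, key, value, ttl):
        self._store[key] = (value, time.time() + ttl)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, expires_at = item
        now = time.time()
        if now < expires_at:
            return value
        # Expired: signal a miss to this client only, and serve the
        # stale value to all other clients for the next GRACE seconds.
        self._store[key] = (value, now + GRACE)
        return None
```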
More information and the patch can be found here.