Rails Integration
This document will guide you through the process of integrating rage-rb into an existing Rails application - rage-rb/rails-integration-app.
The application is a demo for shipping companies to book a time slot to load or unload goods at a warehouse. The booking process consists of three stages:
- a welcome page where users can select the day and duration of their booking;
- an available slots page where users can select a specific timeslot;
- a confirmation page.
Have a peek at the deployed application at https://clumsy-squirrel-4315.pages.dev. You can find the complete example at rage-rb/rails-integration-app#2.
Our final goal is to have Rage process HTTP requests, while everything else (code loading, ActiveRecord, etc.) is still handled by Rails.
First, let's add the gem. We will update the Gemfile:
--- a/Gemfile
+++ b/Gemfile
@@ -6,6 +6,8 @@ ruby "3.2.2"
# Bundle edge Rails instead: gem "rails", github: "rails/rails", branch: "main"
gem "rails", "~> 7.0.6"
+gem "rage-rb"
+
# Use pg as the database for Active Record
gem "pg"
And require the gem in config/application.rb:
--- a/config/application.rb
+++ b/config/application.rb
@@ -47,3 +47,6 @@ module SlotBooking
end
end
end
+
+# make sure to require this file after requiring Rails and defining the application class
+require "rage/rails"
Now, let's update our application to use Rage instead of Rails to handle incoming requests. We will need to update:
- config/routes.rb to make the gem aware of the available routes:
--- a/config/routes.rb
+++ b/config/routes.rb
@@ -1,3 +1,3 @@
-Rails.application.routes.draw do
+Rage.routes.draw do
resources :bookings, only: %i(index create)
end
- ApplicationController to update the parent class for all our controllers:
--- a/app/controllers/application_controller.rb
+++ b/app/controllers/application_controller.rb
@@ -1,2 +1,2 @@
-class ApplicationController < ActionController::API
+class ApplicationController < RageController::API
end
- config.ru to route incoming requests to rage-rb:
--- a/config.ru
+++ b/config.ru
@@ -2,5 +2,4 @@
require_relative "config/environment"
-run Rails.application
-Rails.application.load_server
+run Rage.application
Also, we will now need to start the server using the rage s command, so let's update the Dockerfile:
--- a/Dockerfile
+++ b/Dockerfile
@@ -5,4 +5,4 @@ COPY Gemfile* /app/
RUN bundle install
COPY . /app
EXPOSE 3000
-CMD bundle exec rails s -b 0.0.0.0
+CMD bundle exec rage s -b 0.0.0.0
Believe it or not, we are almost done. The last step is to configure rage-rb.
This step will differ from application to application. For this application, we will set workers_count to 1 to match the Puma settings.
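For reference, a single-worker Puma setup - assumed here, since the app's actual config/puma.rb is not shown in this guide - would look like the following sketch, which the workers_count setting mirrors:

```ruby
# config/puma.rb (assumed) - the Puma settings that workers_count = 1 mirrors
workers 1                      # one forked worker process
threads 5, 5                   # default Rails thread pool size
port ENV.fetch("PORT", 3000)
```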
Also, we will add several middlewares. Rage has its own middleware stack, and by default, it doesn't copy middlewares from the Rails middleware stack.
For this app, we will add ActionDispatch::HostAuthorization for all environments, and ActiveRecord::Migration::CheckPending (to verify there are no pending migrations) in development.
--- a/config/application.rb
+++ b/config/application.rb
@@ -49,3 +49,17 @@ module SlotBooking
end
require "rage/rails"
+
+# make sure to configure rage-rb inside the `Rails.configuration.after_initialize` block
+
+Rails.configuration.after_initialize do
+ Rage.configure do
+ config.server.workers_count = 1
+
+ config.middleware.use ActionDispatch::HostAuthorization
+ if Rails.env.development?
+ config.middleware.use ActiveRecord::Migration::CheckPending
+ end
+ end
+end
+
Also, let's update the CORS middleware. Initially, the application was using Rack::Cors. And while rage-rb is fully compatible with Rack::Cors, for simpler cases it is recommended to use the built-in Rage::Cors middleware.
--- a/config/application.rb
+++ b/config/application.rb
@@ -35,16 +35,6 @@ module SlotBooking
# Middleware like session, flash, cookies can be added back manually.
# Skip views, helpers and assets when generating a new resource.
config.api_only = true
-
- config.middleware.insert_before 0, Rack::Cors do
- allow do
- origins "localhost:5173", "https://clumsy-squirrel-4315.pages.dev"
-
- resource "*",
- headers: :any,
- methods: [:get, :post, :put, :patch, :delete, :options, :head]
- end
- end
end
end
@@ -55,6 +45,10 @@ Rails.configuration.after_initialize do
config.server.workers_count = 1
config.server.port = 3000
+ config.middleware.use Rage::Cors do
+ allow "localhost:5173", "https://clumsy-squirrel-4315.pages.dev"
+ end
+
config.middleware.use ActionDispatch::HostAuthorization
if Rails.env.development?
config.middleware.use ActionDispatch::Reloader
And that's it! Let's now see what we have accomplished with these changes.
For performance testing, we will emulate real-life conditions using this k6 script. The tests were performed with Ruby 3.2.2 on an AWS EC2 t2.medium instance.
Rails results
/\ |‾‾| /‾‾/ /‾‾/
/\ / \ | |/ / / /
/ \/ \ | ( / ‾‾\
/ \ | |\ \ | (‾) |
/ __________ \ |__| \__\ \_____/ .io
execution: local
script: k6.js
output: -
scenarios: (100.00%) 1 scenario, 50 max VUs, 1m30s max duration (incl. graceful stop):
* bookings: 25.00 iterations/s for 1m0s (maxVUs: 50, gracefulStop: 30s)
✓ response code was 200
✓ response code was 200 or 409
✓ CORS header is present
checks.........................: 100.00% ✓ 1757 ✗ 0
data_received..................: 1.4 MB 23 kB/s
data_sent......................: 212 kB 3.4 kB/s
http_req_blocked...............: avg=19.03µs min=1.45µs med=7.81µs max=992.64µs p(90)=11.31µs p(95)=27.25µs
http_req_connecting............: avg=6.63µs min=0s med=0s max=469.75µs p(90)=0s p(95)=0s
http_req_duration..............: avg=949.76ms min=3.77ms med=825.51ms max=59.78s p(90)=1.72s p(95)=2.4s
{ expected_response:true }...: avg=949.76ms min=3.77ms med=825.51ms max=59.78s p(90)=1.72s p(95)=2.4s
http_req_failed................: 0.00% ✓ 0 ✗ 1629
http_req_receiving.............: avg=86.57µs min=38.1µs med=83.14µs max=1.02ms p(90)=105.64µs p(95)=117.76µs
http_req_sending...............: avg=35.78µs min=14.57µs med=34.59µs max=243.32µs p(90)=42.98µs p(95)=55.25µs
http_req_tls_handshaking.......: avg=0s min=0s med=0s max=0s p(90)=0s p(95)=0s
http_req_waiting...............: avg=949.64ms min=3.69ms med=825.41ms max=59.78s p(90)=1.72s p(95)=2.4s
http_reqs......................: 1629 26.333117/s
iteration_duration.............: avg=1.03s min=4.92ms med=918.01ms max=59.79s p(90)=1.75s p(95)=2.57s
iterations.....................: 1501 24.263971/s
vus............................: 22 min=1 max=45
vus_max........................: 50 min=50 max=50
running (1m01.9s), 00/50 VUs, 1501 complete and 0 interrupted iterations
bookings ✓ [======================================] 00/50 VUs 1m0s 25.00 iters/s
Rage results
/\ |‾‾| /‾‾/ /‾‾/
/\ / \ | |/ / / /
/ \/ \ | ( / ‾‾\
/ \ | |\ \ | (‾) |
/ __________ \ |__| \__\ \_____/ .io
execution: local
script: k6.js
output: -
scenarios: (100.00%) 1 scenario, 50 max VUs, 1m30s max duration (incl. graceful stop):
* bookings: 25.00 iterations/s for 1m0s (maxVUs: 50, gracefulStop: 30s)
✓ response code was 200
✓ response code was 200 or 409
✓ CORS header is present
checks.........................: 100.00% ✓ 1868 ✗ 0
data_received..................: 1.6 MB 26 kB/s
data_sent......................: 229 kB 3.8 kB/s
http_req_blocked...............: avg=17.38µs min=673ns med=7.95µs max=428.2µs p(90)=11.61µs p(95)=26.63µs
http_req_connecting............: avg=6.2µs min=0s med=0s max=329.57µs p(90)=0s p(95)=0s
http_req_duration..............: avg=4.13ms min=2.96ms med=3.96ms max=30.31ms p(90)=4.77ms p(95)=5.11ms
{ expected_response:true }...: avg=4.13ms min=2.96ms med=3.96ms max=30.31ms p(90)=4.77ms p(95)=5.11ms
http_req_failed................: 0.00% ✓ 0 ✗ 1684
http_req_receiving.............: avg=76.82µs min=35.88µs med=76.92µs max=197.13µs p(90)=90.17µs p(95)=96.37µs
http_req_sending...............: avg=37.46µs min=12.88µs med=35.39µs max=120.42µs p(90)=42.3µs p(95)=59.27µs
http_req_tls_handshaking.......: avg=0s min=0s med=0s max=0s p(90)=0s p(95)=0s
http_req_waiting...............: avg=4.01ms min=2.82ms med=3.85ms max=30.04ms p(90)=4.66ms p(95)=4.98ms
http_reqs......................: 1684 28.066327/s
iteration_duration.............: avg=4.93ms min=3.3ms med=4.32ms max=31.13ms p(90)=7.84ms p(95)=8.53ms
iterations.....................: 1500 24.999697/s
vus............................: 0 min=0 max=1
vus_max........................: 50 min=50 max=50
running (1m00.0s), 00/50 VUs, 1500 complete and 0 interrupted iterations
bookings ✓ [======================================] 00/50 VUs 1m0s 25.00 iters/s
We can see that while the p95 latency climbs to 2.4 seconds with Rails, it stays at around 5ms with Rage. While the results might differ with different Puma settings, the test shows that Rage can better handle traffic spikes and process many more requests on the same hardware.
Rage supports having both Rails and Rage controllers in the same application. This allows you to integrate Rage into Rails applications that have both full-stack and API parts. Additionally, you can use this feature to migrate to Rage gradually, controller by controller.
To use this feature, update Rage controllers to inherit from RageController::API, leaving Rails controllers intact. Then, update your config.ru file to run Rage.multi_application:
run Rage.multi_application
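Assuming the standard Rails boot layout, the full config.ru for such a mixed application could look like this sketch:

```ruby
# config.ru - sketch of a mixed Rails/Rage setup (assumes the standard
# Rails boot file layout). Rage.multi_application serves requests to Rage
# controllers itself and falls back to Rails for everything else.
require_relative "config/environment"

run Rage.multi_application
```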
Let's now imagine we need to add another step to our application - sending a confirmation email to the user. In the real world, we would use a tool like Sidekiq to ensure the users are not blocked while waiting for the email to be sent.
However, to better understand the framework, we can emulate background processing using fibers. Normally, you would only need to schedule fibers manually to process several requests in parallel. As a rule of thumb, you should always use Fiber.await when manually creating fibers, either right away:
Fiber.await([
Fiber.schedule { request_1 },
Fiber.schedule { request_2 }
])
or at some point later:
fiber_1 = Fiber.schedule { request_1 }
fiber_2 = Fiber.schedule { request_2 }
# do something else...
Fiber.await([fiber_1, fiber_2])
But what happens if we don't await a fiber? Let's check the following example:
def index
Fiber.schedule do
sleep(2)
puts "hello from a fiber"
end
render status: :ok
end
In this action, Ruby will start processing the fiber right after it has been scheduled. However, once the fiber reaches a blocking call (e.g. sleep), it will yield back to the action, allowing the server to send the response to the user instantly. Only two seconds later will we see the "hello from a fiber" text in the console.
This looks exactly like background processing!
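The same hand-off can be sketched with plain Ruby fibers, no Rage scheduler involved - here Fiber.yield stands in for the point where a blocking call would suspend the fiber:

```ruby
# A toy model of the hand-off above, using a plain Ruby fiber.
# Fiber.yield stands in for a blocking call (like sleep) that suspends
# the fiber and returns control to the action.
events = []

background = Fiber.new do
  events << "fiber started"
  Fiber.yield                # "blocking" point: control goes back to the action
  events << "hello from a fiber"
end

background.resume            # the fiber runs until the blocking point
events << "response sent"    # the action finishes and responds immediately
background.resume            # later, the fiber resumes and completes

p events
# => ["fiber started", "response sent", "hello from a fiber"]
```

The key observation is the ordering: the response goes out before the fiber's remaining work runs, which is exactly the behavior of the index action above.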
Let's now update our CreateBooking interactor to send our imaginary email in the background. Due to the nature of such "detached" fibers, all exceptions raised inside them are silently ignored, so it is important to add comprehensive logging:
--- a/app/interactors/create_booking.rb
+++ b/app/interactors/create_booking.rb
@@ -1,7 +1,21 @@
class CreateBooking
def call(start_time:, end_time:)
Booking.create!(start_time: start_time, end_time: end_time)
+ Fiber.schedule { send_confirmation_email }
rescue ActiveRecord::StatementInvalid
:conflict
end
+
+ private
+
+ def send_confirmation_email
+ Rage.logger.tagged("ConfirmationEmail") do
+ HTTP.post("https://httpbin.org/delay/2")
+ Rage.logger.info "ok"
+ rescue => e
+ Rage.logger.error e
+ end
+ end
end
If you now try to book a slot, you will see that the server responds to the user instantly. Several seconds later, once the "email" is sent, a corresponding entry will be logged:
[qmmgvhdlzz2jesej] timestamp=2024-01-04T17:30:12+00:00 pid=17673 level=info method=POST path=/bookings controller=bookings action=create status=200 duration=6.78
...
[qmmgvhdlzz2jesej][ConfirmationEmail] timestamp=2024-01-04T17:30:14+00:00 pid=17673 level=info message=ok