Start - End time | Track
02:00 - 03:00 UTC | Large Hall
tomoya ishida (Lang: ja)
Writing Weird Code
Ruby is a great language to write readable code, and also to write unreadable weird code. In this talk, I will demonstrate how fun it is, and talk about the large effect of writing lots of weird code.
Memo:
The quine was amazing.
What resonated most and stuck with me was how the quine was used as a use case that led into the other talks.
I wish every opening keynote of a conference were like this.
I want to become able to give a talk like that.
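The keynote was built around quines (self-reproducing programs), as the memo above hints. For the flavor, here is a classic minimal Ruby quine; it is a generic example, not code from the talk:

```ruby
# A format string that is interpolated into itself: the program's output
# is byte-for-byte identical to its own source code.
s = "s = %p; puts s %% s"; puts s % s
```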
04:30 - 05:00 UTC | Large Studio
Samuel Giddins (Lang: en)
Remembering (ok, not really Sarah) Marshal
Though the Marshal serialization format has fallen out of favor over the past decade, due to a lack of cross-language interoperability and to security vulnerabilities, I think there’s a lot to learn from it. Having recently reimplemented Marshal.load to sidestep the security concerns, I want to reintroduce the Ruby community to the gem (see what I did there?) that is binary serialization. Let’s walk through how Marshal works under the hood, and see what ideas from it we can salvage for a modern take on data serialization.
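For reference alongside the abstract, a minimal sketch of the Marshal round trip (plain stdlib usage, nothing specific to the speaker's reimplementation); the final comment is the security concern the talk alludes to:

```ruby
# Serialize a Ruby object graph to a compact binary String and back.
payload = Marshal.dump({ name: "rubykaigi", year: 2024 })
puts payload.bytes.take(2).inspect  # => [4, 8], the Marshal format version
restored = Marshal.load(payload)
puts restored[:year]                # => 2024

# Caveat: never Marshal.load bytes from an untrusted source; deserialization
# can instantiate arbitrary objects, which is the classic attack vector.
```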
05:10 - 05:40 UTC | Small Hall
Hiroshi SHIBATA (Lang: ja)
Long journey of Ruby standard library
Ruby has shipped a large set of standard libraries since Ruby 1.8. Today I maintain them openly on GitHub as default and bundled gems, and I am continuing to extract them for Ruby 3.4 and future versions. It has been a long journey. Along the way, some versions may suddenly raise `LoadError` at `require` when running `bundle exec` or `bin/rails`, for example for `matrix` or `net-smtp`. We need to learn how default and bundled gems differ from the standard library. In this presentation, I will introduce what makes it difficult to extract bundled gems from default gems, and the details of how Ruby's `require` and `bundle exec` work with default/bundled gems. You will learn how to handle your own issues with the standard library.
Memo:
I know almost nothing about gems or the standard library, so I'm looking forward to this.
I'm also curious how it relates to the ruby.wasm talk at this conference.
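To make the `LoadError` scenario concrete, here is a hedged sketch (assuming a Ruby version where `net-smtp` ships as a bundled gem rather than a default gem, i.e. 3.1 or later):

```ruby
# Outside Bundler this works, because the bundled gem is installed with Ruby:
require "net/smtp"

# Under `bundle exec`, only gems declared in the Gemfile are on the load path,
# so the same require raises LoadError unless the Gemfile says:
#
#   # Gemfile
#   source "https://rubygems.org"
#   gem "net-smtp"
```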
05:50 - 06:20 UTC | Large Hall
Satoshi Tagomori (Lang: ja)
Namespace, What and Why
Namespace is a feature in development to separate Ruby code, native extensions, and gems into separate spaces. The expected benefits of this feature are:
* Making code and libraries name-collision-free
* Having isolated Module/Class instances
* Loading different versions of libraries in a single Ruby process
This talk will introduce what the namespace is (or will be), why I want this feature in Ruby, and how it will help your applications.
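The namespace API is still being designed, so rather than guess at it, here is a small plain-Ruby illustration of the name-collision problem it aims to solve (`Cache` is a made-up module name):

```ruby
module Cache          # imagine this ships with gem A
  def self.backend = "memory"
end

module Cache          # ...and this ships with gem B, loaded into the same global namespace
  def self.backend = "redis"
end

puts Cache.backend    # => "redis"; gem B silently overwrote gem A's definition
# With isolated namespaces, each library would see its own Cache.
```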
07:00 - 07:30 UTC | Small Hall
Mari Imaizumi (Lang: ja)
Exploring Reline: Enhancing Command Line Usability
Reline is a pure Ruby implementation of GNU Readline; GNU Readline allows you to write configuration in `.inputrc`, and Reline reads this configuration file and sets key bindings. However, there are many things that GNU Readline can do that Reline cannot. This session will introduce those features and talk about their implementation in Reline.
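As a reference point, Reline mirrors the classic `readline` API and picks up key-binding configuration from `~/.inputrc` (or `$INPUTRC`) on startup, just like GNU Readline; a minimal sketch:

```ruby
require "reline"

# Lines typed here get Readline-style editing; `true` adds them to history.
while (line = Reline.readline("> ", true))
  break if line.strip == "exit"
  puts "you typed: #{line}"
end
```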
07:40 - 08:10 UTC | Large Hall
Koichi Sasada (Lang: en)
Ractor Enhancements, 2024
This talk presents recent updates to Ractor, which enables parallel and concurrent programming in Ruby. Ractor still lacks fundamental features. For example, we cannot use the `require` and `timeout` methods on non-main Ractors because of synchronization and implementation issues. We will discuss such problems and how to solve them. From a performance point of view, we introduced the M:N thread scheduler in Ruby 3.3, and we will show a performance analysis with recent improvements.
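For context, a minimal Ractor sketch; the commented-out line is the kind of call that currently only works on the main Ractor, which is the gap the talk discusses (the exact error class and message vary by Ruby version):

```ruby
# Two Ractors computing in parallel; results are collected with #take.
r1 = Ractor.new { (1..1_000_000).sum }
r2 = Ractor.new { (1..1_000_000).reduce(:+) }
puts r1.take + r2.take

# Ractor.new { require "json" }.take
#   => raises on current Rubies, because `require` is restricted to the main Ractor.
```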
Start - End time | Track
01:20 - 02:20 UTC | Large Hall
Samuel Williams (Lang: en)
Leveraging Falcon and Rails for Real-Time Interactivity
In the rapidly evolving landscape of web-based gaming, Ruby's potential for building dynamic, real-time interactive experiences is often underrated. This talk aims to shatter this misconception by demonstrating the powerful synergy between Falcon, an asynchronous web server, and Ruby on Rails, the stalwart of web application frameworks. We will embark on a journey to design and implement a real-time interactive game from the ground up, showcasing how Ruby, when coupled with Falcon's concurrency capabilities, can be a formidable tool in the gaming domain. Key focus areas will include leveraging Falcon's event-driven architecture for managing high-throughput, low-latency game data, and integrating it seamlessly with Rails to create an engaging user experience. Attendees will gain insights into the nuances of real-time web communication in Ruby, efficient handling of WebSockets, and the application of Rails' robust features in a gaming context.
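Falcon sits on top of the `async` gem; as a rough, generic illustration (not code from the talk) of the event-driven model that makes many concurrent connections cheap, here is its task API multiplexing work on a single thread:

```ruby
require "async"  # the event loop Falcon is built on

Async do |task|
  # Each task yields to the event loop while it waits, so 100 concurrent
  # "connections" sleeping for a second finish in roughly one second total.
  100.times do
    task.async do
      sleep 1   # non-blocking inside Async (fiber scheduler)
      # ...read/write a WebSocket frame here in a real server...
    end
  end
end
puts "all tasks finished"
```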
02:30 - 03:00 UTC | Large Hall
Peter Zhu, Adam Hess (Lang: en)
Finding Memory Leaks in the Ruby Ecosystem
Ruby 3.3 introduces a powerful new feature for identifying memory leaks. Over the past year we have been working on improving memory usage within Ruby and developing tools to give native extension authors more confidence in memory management. In this talk, we will explain what memory leaks are, the impacts of memory leaks, our new feature RUBY_FREE_AT_EXIT in Ruby 3.3, and memory leaks found through this feature. In addition, we will discuss our future roadmap for Ruby 3.4 to improve this feature for native gem maintainers.
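For reference, `RUBY_FREE_AT_EXIT=1` (new in Ruby 3.3) tells the VM to free everything it still owns at shutdown, so external leak checkers only flag memory that was genuinely lost; the checker invocation below is illustrative, not from the talk:

```ruby
# Typical usage with an external leak checker (illustrative):
#   RUBY_FREE_AT_EXIT=1 valgrind --leak-check=full ruby suspect_script.rb
#
# From Ruby itself you can at least watch heap growth while reproducing a leak:
require "objspace"

GC.start
before = ObjectSpace.memsize_of_all
$retained = Array.new(10_000) { "x" * 1_000 }  # deliberately retained allocation
GC.start
puts "heap grew by #{ObjectSpace.memsize_of_all - before} bytes"
```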
04:30 - 05:00 UTC | Small Hall
Yuta Saito (Lang: en)
RubyGems on ruby.wasm
Running gems on WebAssembly has been one of the most requested features since the initial release of `ruby.wasm`. Today, `ruby.wasm` experimentally supports RubyGems integration, thanks to a recent WebAssembly ecosystem evolution called [Component Model](https://github.com/WebAssembly/component-model). It supports packaging your Ruby application and gem dependencies into a WebAssembly program! This talk will demonstrate the integration and share how it works. I hope it will unlock your interesting ideas.
05:10 - 05:40 UTC | Large Studio
Masaki Hara (Lang: en)
Getting along with YAML comments with Psych
psych-comments allows you to manipulate YAML documents without discarding comments. This talk covers how we tried to automate YAML authoring, how we went wrong by (ab)using YAML tags for annotations, and how we solved the problem by bringing this library into being. The audience will get a grasp of YAML's depths and learn how a small library helps automation.
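To see the problem psych-comments tackles, note that a plain Psych round trip (stdlib only; the gem's own API is not shown here) silently drops comments:

```ruby
require "yaml"  # stdlib Psych

source = <<~YAML
  # pin to the LTS release  <- the annotation we would like to keep
  version: "3.3"
YAML

data = YAML.safe_load(source)
puts YAML.dump(data)
# => ---
#    version: '3.3'
# The comment is gone: load/dump works on the data, not the text, which is why
# naive automated YAML edits destroy hand-written annotations.
```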
05:50 - 06:20 UTC | Large Hall
Maxime Chevalier-Boisvert (Lang: en)
Breaking the Ruby Performance Barrier
With each of the past 3 Ruby releases, YJIT has delivered higher and higher performance. However, we are seeing diminishing returns, because as JIT-compiled code becomes faster, it makes up less and less of the total execution time, which is now becoming dominated by C function calls. As such, it may appear like there is a fundamental limit to Ruby’s performance. In the first half of the 20th century, some early airplane designers thought that the speed of sound was a fundamental limit on the speed reachable by airplanes, thus coining the term “sound barrier”. This limit was eventually overcome, as it became understood that airflow behaves differently at supersonic speeds. In order to break the Ruby performance barrier, it will be necessary to reduce the dependency on C extensions, and start writing more gems in pure Ruby code. In this talk, I want to look at this problem more in depth, and explore how YJIT can help enable writing pure-Ruby software that delivers high performance levels.
07:00 - 07:30 UTC | Large Studio
Benoit Daloze (Lang: en)
From Interpreting C Extensions to Compiling Them
Since the start, TruffleRuby took a unique approach to supporting C (and C++) extensions: it interprets and just-in-time compiles them. This gave some unique advantages, like optimizing and inlining C and Ruby together in the JIT and being able to debug C and Ruby in a single debugger. However, it also has some downsides, including long warmup times (it takes a while to JIT compile all the C extension code), compatibility with huge C extensions (e.g. grpc), and slower installation of C extension gems. In the last release, TruffleRuby changed its approach to run C extensions natively, like CRuby. In this talk, we would like to tell you this story, illustrate the challenges, and discuss which parts of the C API could be improved. We also explore how to run C extensions faster by using “inline caches in C”, which could also be applied in CRuby. Come and learn from 10 years of implementing and optimizing C extensions in various ways!
07:40 - 08:10 UTC | Small Hall
monochrome (Lang: ja)
Running Optcarrot (faster) on my own Ruby.
These past few years, I have been working on yet another Ruby implementation named "monoruby". *Monoruby* is written in Rust and consists of a parser, a garbage collector, a bytecode-based interpreter, and a just-in-time compiler, all built from scratch. This is not just a toy project; we ran the Optcarrot benchmark on *monoruby* and its performance was comparable to other modern and fast Ruby implementations such as YJIT and TruffleRuby. In this talk, I would like to present the design and implementation details of *monoruby*.
Start - End time | Track
01:10 - 02:20 UTC | Large Hall
CRuby Committers (Lang: ja)
Ruby Committers and the World
CRuby committers on stage!
02:30 - 03:00 UTC | Large Hall
Takashi Kokubun (Lang: en)
YJIT Makes Rails 1.7x Faster
Have you enabled Ruby 3.3 YJIT? You’re using a much slower Ruby if you haven’t. YJIT makes Railsbench 1.7x faster. In production, YJIT presents a 17% speedup to millions of requests per second at Shopify. Why does YJIT make Ruby so much faster? In this talk, you’ll explore the latest YJIT optimizations that have a huge impact on your application’s performance. Once you understand what you're missing out on, you can't help but enable YJIT.
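For reference, the usual ways to switch YJIT on in Ruby 3.3 (command-line flag, environment variable, or a runtime call from an initializer), plus a quick check that it is active:

```ruby
# At process start:
#   ruby --yjit app.rb
# or:
#   RUBY_YJIT_ENABLE=1 ruby app.rb
#
# Or at runtime (Ruby 3.3+), e.g. from a Rails initializer:
RubyVM::YJIT.enable unless RubyVM::YJIT.enabled?

puts RubyVM::YJIT.enabled?  # => true
```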
04:30 - 05:00 UTC | Large Hall
Aaron Patterson (Lang: ja)
Speeding up Instance Variables with Red-Black Trees
The introduction of Object Shapes helped speed up cached instance variable reads as well as decreasing the machine code required for JIT compilation. But what about cache misses? Is there any way we can speed up instance variable access in that case? Ruby 3.3 introduced a red-black tree cache to speed up instance variable cache misses. Let’s learn how instance variables are implemented, and how the red-black tree cache speeds them up!
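Background for the abstract (plain Ruby, nothing version-specific): objects that set the same instance variables in the same order share a shape, so reads hit the inline cache; divergent orders create extra shapes and cause the cache misses the red-black tree lookup is meant to absorb:

```ruby
class Point
  def initialize(x, y)
    @x = x   # every Point sets @x then @y, so all Points share one shape
    @y = y   # => instance-variable reads at a call site stay monomorphic
  end
end

class Sloppy
  def initialize(flag)
    if flag
      @a = 1; @b = 2   # shape transition: @a -> @b
    else
      @b = 2; @a = 1   # shape transition: @b -> @a (a different shape)
    end
  end
end
# Call sites reading ivars on Sloppy instances now see multiple shapes, which is
# the cache-miss case that Ruby 3.3's red-black tree cache speeds up.
```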
05:10 - 05:40 UTC | Small Hall
Junichi Kobayashi (Lang: ja)
From LALR to IELR: A Lrama's Next Step
In parse.y, there is a variable that represents the state of the lexer, based on the idea that the parser and the lexer can be separated. In reality, however, the states of the parser and the lexer are shared and cannot really be called separated. The lexer state is also managed manually, and accumulated history has clouded the view, so additions and modifications must be done with care. With the replacement of the parser generator from Bison to Lrama, the time has come to attack parse.y from the parser generator side, and Lrama is trying to solve this problem by generating a parser based on an algorithm called PSLR. As a first step, I will show how Lrama can generate a parser for a new algorithm called IELR, which is a prerequisite for PSLR. IELR is an improved version of LALR and can parse grammars that LALR cannot. In this presentation, I will explain the implementation of Lrama and how the parser is actually generated.
05:50 - 06:20 UTC | Large Hall
KJ Tsanaktsidis (Lang: en)
Finding and fixing memory safety bugs in C with ASAN
In order to deliver an experience of programmer happiness to users of Ruby, the developers of CRuby itself (as well as the authors of extension gems) must cut through the dangerous jungle of manual memory management in C. Simple mistakes in the use of pointers, or failing to follow the Ruby garbage collector's rules precisely, can work fine on development machines but cause rare, hard to debug crashes in production environments. ASAN (Address SANitizer) is a tool for instrumenting compiled code to catch invalid memory accesses as they happen and crash the program immediately, leading you straight to the buggy code. This is far easier to troubleshoot than crashing at some later point, when the memory corruption has caused some other, totally innocent code to crash! In this talk, you'll learn how to enable ASAN in your builds (of both Ruby itself and of extension gems), and how to interpret its output. We'll also cover a little bit about how ASAN works in CRuby.
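For extension-gem authors, one hedged way to get ASAN instrumentation into a gem's build is through `extconf.rb` (flags assume gcc/clang; building CRuby itself with ASAN is a separate configure step the talk covers):

```ruby
# extconf.rb -- minimal sketch, not an official recipe
require "mkmf"

# Instrument the extension's C code with AddressSanitizer.
$CFLAGS  << " -fsanitize=address -fno-omit-frame-pointer -g"
$LDFLAGS << " -fsanitize=address"

create_makefile("my_ext/my_ext")  # "my_ext" is a placeholder extension name
```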
07:00 - 07:30 UTC | Large Studio
Vinicius Stock (Lang: en)
The state of Ruby dev tooling
During the last few years, the Ruby community invested significant effort into improving developer tooling. A lot of this effort has been divergent: trying out many solutions to find out what works best and fits Rubyists' expectations. So where are we at this point? How do we compare to other ecosystems? Is it time to converge, unite efforts and reduce fragmentation? And where are we going next? Let’s analyze the full picture of Ruby developer tooling and try to answer these questions together.
07:40 - 08:40 UTC