| [Gemma_1]Basics_with_HF.ipynb | Load, run, fine-tune, and deploy Gemma using Hugging Face. |
| [Gemma_1]Common_use_cases.ipynb | Illustrate some common use cases for Gemma. |
| [Gemma_1]Inference with Flax/NNX | Gemma 1 inference with the Flax/NNX framework (links to the Flax documentation). |
| [Gemma_1]Inference_on_TPU.ipynb | Basic inference of Gemma with JAX/Flax on TPU. |
| [Gemma_1]Using_with_Ollama.ipynb | Run Gemma models using Ollama. |
| [Gemma_1]Using_with_OneTwo.ipynb | Integrate Gemma with Google OneTwo. |
| [Gemma_1]data_parallel_inference_in_jax_tpu.ipynb | Data-parallel inference of Gemma with JAX/Flax on TPU. |
| [Gemma_2]Constrained_generation.ipynb | Constrained generation with Gemma models using LlamaCpp and Guidance. |
| [Gemma_2]Deploy_in_Vertex_AI.ipynb | Deploy a Gemma model using Vertex AI. |
| [Gemma_2]Deploy_with_vLLM.ipynb | Deploy a Gemma model using vLLM. |
| [Gemma_2]Game_Design_Brainstorming.ipynb | Use Gemma with Keras to brainstorm ideas during game design. |
| [Gemma_2]Gradio_Chatbot.ipynb | Build a chatbot with Gemma and Gradio. |
| [Gemma_2]Guess_the_word.ipynb | Play a word-guessing game with Gemma using Keras. |
| [Gemma_2]Keras_Quickstart.ipynb | Quickstart tutorial for the Gemma 2 pre-trained 9B model with Keras. |
| [Gemma_2]Keras_Quickstart_Chat.ipynb | Quickstart tutorial for the Gemma 2 instruction-tuned 9B model with Keras. Referenced in this blog. |
| [Gemma_2]Synthetic_data_generation.ipynb | Synthetic data generation with Gemma 2. |
| [Gemma_2]Using_Gemini_and_Gemma_with_RouteLLM.ipynb | Route requests between Gemma and Gemini models using RouteLLM. |
| [Gemma_2]Using_with_LLM_Comparator.ipynb | Compare Gemma with another LLM using LLM Comparator. |
| [Gemma_2]Using_with_Langfun_and_LlamaCpp.ipynb | Leverage Langfun to seamlessly integrate natural language with programming using Gemma 2 and LlamaCpp. |
| [Gemma_2]Using_with_Langfun_and_LlamaCpp_Python_Bindings.ipynb | Leverage Langfun for smooth language-program interaction with Gemma 2 and llama-cpp-python. |
| [Gemma_2]Using_with_LlamaCpp.ipynb | Run Gemma models using LlamaCpp. |
| [Gemma_2]Using_with_Llamafile.ipynb | Run Gemma models using Llamafile. |
| [Gemma_2]Using_with_LocalGemma.ipynb | Run Gemma models using Local Gemma. |
| [Gemma_2]Using_with_Mesop.ipynb | Integrate Gemma with Google Mesop. |
| [Gemma_2]Using_with_Ollama_Python.ipynb | Run Gemma models using the Ollama Python library. |
| [Gemma_2]Using_with_SGLang.ipynb | Run Gemma models using SGLang. |
| [Gemma_2]Using_with_Xinference.ipynb | Run Gemma models using Xinference. |
| [Gemma_2]Using_with_mistral_rs.ipynb | Run Gemma models using mistral.rs. |
| [Gemma_2]for_Japan_using_Transformers_and_PyTorch.ipynb | Gemma 2 for Japan, using Transformers and PyTorch. |
| [Gemma_2]on_Groq.ipynb | Leverage the free Gemma 2 9B IT model hosted on Groq (very fast inference). |
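A common thread across the local runtimes listed above (LlamaCpp, Llamafile, Local Gemma, Ollama, mistral.rs) is that Gemma's instruction-tuned checkpoints expect the same turn-based prompt format, with each turn wrapped in `<start_of_turn>`/`<end_of_turn>` markers. As a minimal sketch (the helper function below is illustrative, not taken from any of these notebooks), that format can be built by hand:

```python
def format_gemma_prompt(messages):
    """Render a list of {"role", "content"} dicts into a Gemma-IT prompt.

    Roles are "user" or "model"; the prompt ends with an open model turn
    so the model generates the next reply.
    """
    parts = []
    for m in messages:
        # Each completed turn is delimited by start/end-of-turn markers.
        parts.append(f"<start_of_turn>{m['role']}\n{m['content']}<end_of_turn>\n")
    # Leave the final model turn open for generation.
    parts.append("<start_of_turn>model\n")
    return "".join(parts)

prompt = format_gemma_prompt([{"role": "user", "content": "Hello!"}])
print(prompt)
```

In practice, chat-aware runtimes and tokenizer chat templates apply this formatting for you; hand-rolling it is mainly useful when sending raw completion requests.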