Organizations

@apache @awslabs @aws-samples @dmlc @brainnetome

Pinned

  1. apache/tvm Public

    Open deep learning compiler stack for CPU, GPU, and specialized accelerators

    Python · 12.2k stars · 3.6k forks

  2. mxnet-mobilenet-v2 Public

    Reproduction of MobileNetV2 using MXNet

    Python · 128 stars · 20 forks

  3. vllm-project/vllm Public

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 44.2k stars · 6.8k forks

  4. aws-neuron/transformers-neuronx Public

    Python · 104 stars · 28 forks

75 contributions in the last year

[Contribution graph: daily contributions, April 2024 – March 2025]

Activity overview

Contributed to vllm-project/vllm, aws-neuron/transformers-neuronx, awslabs/slapo and 8 other repositories
A graph representing liangfu's contributions from April 07, 2024 to April 11, 2025. The contributions are 42% code review, 27% pull requests, 27% commits, 4% issues.

Contribution activity

April 2025

Created 2 commits in 1 repository

Created a pull request in vllm-project/vllm that received 7 comments

[Neuron][kernel] Fuse kv cache into a single tensor

Fusing the KV cache into a single tensor helps eliminate unnecessary slice operators on the K/V cache tensors. %p11.224 = bf16[2,17487,4,32,64]{4,3,2,1,0}…

+46 −56 lines changed 7 comments
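As a rough illustration of the layout change described above (not the PR's actual code), the fused approach allocates keys and values in one tensor whose leading dimension of size 2 distinguishes K from V, matching the bf16[2, …] shape in the snippet, instead of keeping two separate cache tensors. The dimension names below are assumptions for illustration only.

```python
# Minimal sketch, assuming a paged KV cache layout; names and sizes are illustrative.
import torch

num_blocks, num_heads, block_size, head_dim = 1024, 4, 32, 64

# Separate allocation: K and V live in two distinct tensors.
k_cache = torch.zeros(num_blocks, num_heads, block_size, head_dim, dtype=torch.bfloat16)
v_cache = torch.zeros(num_blocks, num_heads, block_size, head_dim, dtype=torch.bfloat16)

# Fused allocation: a single tensor with a leading dimension of 2,
# where index 0 holds keys and index 1 holds values.
kv_cache = torch.zeros(2, num_blocks, num_heads, block_size, head_dim, dtype=torch.bfloat16)
k_view, v_view = kv_cache[0], kv_cache[1]  # views into the same buffer, no copy
```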
Reviewed 3 pull requests in 1 repository
vllm-project/vllm 3 pull requests