Agent to replicate markets to Omen #61
Conversation
Warning: Rate Limit Exceeded. @kongzii has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 16 minutes and 22 seconds before requesting another review. CodeRabbit enforces hourly rate limits for each developer per organization; we recommend spacing out commits to avoid hitting them.

Walkthrough: This update implements significant enhancements across the Prediction Market Agent Tooling suite, focusing on integrating and improving the deployment, benchmarking, and market replication functionalities. Key changes include the introduction of enums and new parameters for better configuration and deployment flexibility, enhanced market categorization and predictability checks, and streamlined processes for replicating markets to the Omen platform. These improvements aim to expand and refine the tooling's capabilities for prediction market analysis and operation.
Review Status
Actionable comments generated: 1
Configuration used: CodeRabbit UI
Files ignored due to path filters (2)
- poetry.lock is excluded by: `!**/*.lock`
- pyproject.toml is excluded by: `!**/*.toml`
Files selected for processing (16)
- examples/cloud_deployment/gcp/deploy.py (3 hunks)
- prediction_market_agent_tooling/benchmark/utils.py (6 hunks)
- prediction_market_agent_tooling/config.py (3 hunks)
- prediction_market_agent_tooling/gtypes.py (2 hunks)
- prediction_market_agent_tooling/markets/categorize.py (1 hunks)
- prediction_market_agent_tooling/markets/manifold/manifold.py (1 hunks)
- prediction_market_agent_tooling/markets/omen/data_models.py (3 hunks)
- prediction_market_agent_tooling/markets/omen/omen.py (20 hunks)
- prediction_market_agent_tooling/markets/omen/replicate/agent_example.py (1 hunks)
- prediction_market_agent_tooling/markets/omen/replicate/replicate.py (1 hunks)
- prediction_market_agent_tooling/tools/gnosis_rpc.py (1 hunks)
- prediction_market_agent_tooling/tools/web3_utils.py (3 hunks)
- scripts/bet_omen.py (3 hunks)
- scripts/create_market_omen.py (2 hunks)
- scripts/replicate_to_omen.py (1 hunks)
- tests/test_benchmark.py (6 hunks)
Additional comments: 14
prediction_market_agent_tooling/tools/gnosis_rpc.py (1)
- 6-6: The update to `GNOSIS_RPC_URL` aligns with the PR objectives to change the default Gnosis RPC URL to a new value. This change is straightforward and correctly implemented.

prediction_market_agent_tooling/markets/omen/data_models.py (3)
- 46-47: The addition of the `category` and `creationTimestamp` fields to the `OmenMarket` class enhances the model by providing more detailed information about the market. Ensure that these new fields are properly documented and validated where necessary to maintain data integrity.
- 56-62: The introduction of the `question` and `creation_datetime` properties in the `OmenMarket` class is a good practice, as it encapsulates the logic for deriving these values from existing fields (`title` and `creationTimestamp`, respectively). This approach improves code readability and maintainability.
- 189-189: Updating the `created_time` assignment in the `to_generic_resolved_bet` method to use the new `creation_datetime` property ensures consistency in how creation times are handled within the class, and is a positive improvement.

tests/test_benchmark.py (1)
- 157-158: The addition of the `category` parameter and the `close_time` calculation in the test functions is a necessary update to reflect the changes made to the market data models. Ensure that the `category` parameter is used appropriately in the tests and that the `close_time` calculation accurately represents the intended behavior in real scenarios. Consider adding more varied test cases to cover different categories and close times to ensure comprehensive testing. Also applies to: 182-183, 195-196, 209-210, 223-224, 236-237
prediction_market_agent_tooling/benchmark/utils.py (4)
- 23-23: Adding `closing_this_month` to the `MarketFilter` enum is a useful enhancement for filtering markets based on their closing time. Ensure that this new filter option is properly handled in all relevant functions to maintain consistent behavior across the tooling.
- 46-51: The addition of `category` and `close_time` fields to the `Market` class, along with the validation for `close_time`, improves the model by providing more detailed information about the markets and ensuring data integrity. It's important to document these changes and consider adding validation for the `category` field if there are specific constraints on its values. Also applies to: 69-73
- 217-221: The updates to the `get_manifold_markets` and `get_polymarket_markets` functions to map and include the new `category` and `close_time` fields from JSON are necessary to align with the changes in the `Market` class. Ensure that the mapping and data conversion are correctly implemented, and consider edge cases where the JSON data might be missing or malformed. Also applies to: 346-352
- 380-381: The handling of the `closing_this_month` filter in the `get_markets` function by raising a `ValueError` for Polymarket indicates that this filter option is not supported for Polymarket sources. This decision should be clearly documented, and alternative approaches should be considered if the functionality becomes available or necessary in the future.

prediction_market_agent_tooling/markets/omen/omen.py (5)
- 44-46: The `place_bet` function has been refactored to directly use values without `check_not_none`. Ensure that the values being used (`amount_xdai`, `from_address`, `from_private_key`) are validated or sanitized elsewhere in the code to prevent potential issues with null or invalid values.
- 220-250: The GraphQL query construction for `getFixedProductMarketMakers` has been updated to include a `creator` parameter conditionally. This is a good practice for making the query more flexible. However, ensure that the `creator` parameter is properly sanitized before being used in the query to prevent potential security vulnerabilities such as injection attacks.
- 263-288: The `get_omen_markets` and `get_omen_binary_markets` functions have been adjusted to include a `creator` parameter. This enhancement allows for more targeted queries. It's important to validate the `creator` parameter to ensure it conforms to expected formats (e.g., `HexAddress`) and does not introduce security risks.
- 900-914: 📝 NOTE: This review was outside the diff hunks and was mapped to the diff hunk with the greatest overlap. Original lines [909-920]. Transaction functions such as `omen_approve_market_maker_to_spend_collateral_token_tx`, `omen_deposit_collateral_token_tx`, `omen_realitio_ask_question_tx`, and `omen_prepare_condition_tx` have been updated to remove nonce handling and use `from_address` directly. Removing nonce handling simplifies the transaction process, but ensure that this change does not introduce issues with transaction ordering or replay attacks. Using `from_address` directly is straightforward, but validate that it is always provided in the correct format and is not susceptible to misuse. Also applies to: 933-950
- 215-253: 📝 NOTE: This review was outside the diff hunks and was mapped to the diff hunk with the greatest overlap. Original lines [207-244]. The GraphQL query `_QUERY_GET_SINGLE_FIXED_PRODUCT_MARKET_MAKER` has been updated to include a `category` field. This is a positive change for enhancing market data retrieval. Ensure that the backend GraphQL schema supports this field and that it is properly indexed for efficient querying.
```diff
 import requests
 from web3.types import Wei

-GNOSIS_RPC_URL = os.getenv(
-    "GNOSIS_RPC_URL", "https://lb.nodies.app/v1/406d8dcc043f4cb3959ed7d6673d311a"
-)
+GNOSIS_RPC_URL = os.getenv("GNOSIS_RPC_URL", "https://gnosis-rpc.publicnode.com")


 def get_balance(address: str) -> Wei:
```
📝 NOTE: This review was outside the diff hunks and was mapped to the diff hunk with the greatest overlap. Original lines [7-17].

The `get_balance` function correctly implements the logic to fetch the balance for a given address using the updated `GNOSIS_RPC_URL`. However, it's important to handle potential errors from the `requests.post` call and the JSON parsing operation. Consider adding error handling to manage scenarios where the request fails or the response format is unexpected.
```python
def get_balance(address: str) -> Wei:
    try:
        response = requests.post(
            GNOSIS_RPC_URL,
            json={
                "jsonrpc": "2.0",
                "method": "eth_getBalance",
                "params": [address, "latest"],
                "id": 1,
            },
            headers={"content-type": "application/json"},
        )
        response.raise_for_status()  # Raises HTTPError for bad responses.
        data = response.json()
        balance = Wei(int(data["result"], 16))  # Convert hex value to int.
        return balance
    except requests.RequestException as e:
        # Handle network-related errors here.
        raise ConnectionError(f"Failed to fetch balance: {e}")
    except ValueError as e:
        # Handle JSON decoding errors here.
        raise ValueError(f"Invalid response format: {e}")
```
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (1)
- prediction_market_agent_tooling/markets/omen/replicate/agent_example.py (1 hunks)
Files skipped from review as they are similar to previous changes (1)
- prediction_market_agent_tooling/markets/omen/replicate/agent_example.py
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (4)
- prediction_market_agent_tooling/markets/omen/omen.py (20 hunks)
- prediction_market_agent_tooling/markets/omen/replicate/agent_example.py (1 hunks)
- prediction_market_agent_tooling/markets/omen/replicate/replicate.py (1 hunks)
- scripts/replicate_to_omen.py (1 hunks)
Files skipped from review as they are similar to previous changes (4)
- prediction_market_agent_tooling/markets/omen/omen.py
- prediction_market_agent_tooling/markets/omen/replicate/agent_example.py
- prediction_market_agent_tooling/markets/omen/replicate/replicate.py
- scripts/replicate_to_omen.py
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (1)
- prediction_market_agent_tooling/markets/omen/replicate/agent_example.py (1 hunks)
Files skipped from review as they are similar to previous changes (1)
- prediction_market_agent_tooling/markets/omen/replicate/agent_example.py
Review Status
Actionable comments generated: 3
Configuration used: CodeRabbit UI
Files selected for processing (2)
- prediction_market_agent_tooling/benchmark/utils.py (6 hunks)
- prediction_market_agent_tooling/markets/omen/replicate/agent_example.py (1 hunks)
Files skipped from review as they are similar to previous changes (1)
- prediction_market_agent_tooling/markets/omen/replicate/agent_example.py
Additional comments: 5
prediction_market_agent_tooling/benchmark/utils.py (5)
- 23-23: Added a new enum value `closing_this_month` to `MarketFilter`. This addition aligns with the objectives of enhancing market categorization and filtering capabilities. Ensure that the corresponding logic to handle this new filter option is implemented wherever `MarketFilter` is used.
- 51-51: Added a new field `close_time` to the `Market` class. This field is essential for filtering markets based on their closing time. The use of the `datetime` type ensures that time comparisons can be performed accurately. However, ensure that all market sources provide a `close_time` in a consistent format to avoid parsing errors.
- 69-73: Added validation for the `close_time` field in the `Market` class. This validation ensures that `close_time` always has timezone information, defaulting to UTC if none is provided. This is good practice for handling datetime objects, especially when dealing with data from multiple time zones. Consider adding similar validation for other datetime fields if not already present.
- 217-221: Updated the `get_manifold_markets` function to map the `closeTime` field from the Manifold API response to the `close_time` field in the `Market` class. This change is necessary to support the new `close_time` field. Ensure that the Manifold API consistently provides the `closeTime` field in the expected format to avoid parsing errors.
- 354-360: Updated the `get_polymarket_markets` function to include the `category` and `close_time` fields from the JSON response. This update is crucial for integrating the new fields into the `Market` class. Ensure that the Polymarket API provides these fields in a consistent and reliable manner. Additionally, validate the date format used for `close_time` to ensure compatibility with the `datetime.strptime` method.
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (1)
- prediction_market_agent_tooling/markets/omen/replicate/replicate.py (1 hunks)
Files skipped from review as they are similar to previous changes (1)
- prediction_market_agent_tooling/markets/omen/replicate/replicate.py
Review Status
Actionable comments generated: 1
Configuration used: CodeRabbit UI
Files selected for processing (5)
- examples/cloud_deployment/gcp/deploy.py (3 hunks)
- prediction_market_agent_tooling/deploy/agent.py (2 hunks)
- prediction_market_agent_tooling/deploy/gcp/deploy.py (2 hunks)
- prediction_market_agent_tooling/markets/omen/replicate/agent_example.py (1 hunks)
- prediction_market_agent_tooling/markets/omen/replicate/replicate.py (1 hunks)
Files skipped from review as they are similar to previous changes (3)
- examples/cloud_deployment/gcp/deploy.py
- prediction_market_agent_tooling/markets/omen/replicate/agent_example.py
- prediction_market_agent_tooling/markets/omen/replicate/replicate.py
Additional comments: 4
prediction_market_agent_tooling/deploy/gcp/deploy.py (2)
- 32-32: The addition of the `timeout` parameter to the `deploy_to_gcp` function signature is a positive change, as it allows for customizable function execution timeouts. This is particularly useful for tailoring deployments based on the expected execution time of different functions.
- 73-73: The use of the `timeout` parameter in the `gcloud_deploy_cmd` function call is correct and ensures that the deployment command respects the specified timeout. This is a good practice for enhancing the flexibility and reliability of deployments.

prediction_market_agent_tooling/deploy/agent.py (2)
- 70-70: The addition of the `timeout` parameter with a default value of 180 to the `deploy_gcp` function is a sensible choice. It provides a reasonable default timeout for function deployments, which can be overridden if necessary. This enhances the flexibility and usability of the deployment process.
- 102-102: Passing the `timeout` parameter to the `deploy_to_gcp` function call is correctly implemented. This ensures that the customizable timeout value is utilized during the deployment process, aligning with the changes made in `deploy.py`.
```diff
@@ -69,6 +70,7 @@ def deploy_to_gcp(
         env_vars=env_vars,
         secrets=secrets,
         memory=memory,
+        timeout=timeout,
     )
     try:
         subprocess.run(cmd, shell=True, check=True)
```
The use of `subprocess.run` with `shell=True` is flagged as a security risk by static analysis tools, because it can make the application vulnerable to shell injection attacks if user input is not properly sanitized. Consider using `shell=False` and providing the command as a list of arguments to mitigate this risk.
```diff
- subprocess.run(cmd, shell=True, check=True)
+ subprocess.run(cmd.split(), shell=False, check=True)
```
Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation.
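As an aside, a plain `cmd.split()` breaks on quoted arguments. The standard library's `shlex.split` respects shell quoting; a sketch (the command string is purely illustrative):

```python
import shlex
import subprocess

# A gcloud-style command where one flag value contains a space.
cmd = 'gcloud functions deploy my-fn --set-env-vars KEY="a b"'

args = shlex.split(cmd)  # the quoted value "a b" stays a single argument
# subprocess.run(args, shell=False, check=True)  # no shell involved, so no injection via cmd
```

With `cmd.split()` the quoted value would be torn into two arguments, so `shlex.split` is the safer drop-in when moving away from `shell=True`.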
```diff
@@ -44,7 +44,7 @@ def from_data_model(model: ManifoldMarket) -> "ManifoldAgentMarket":
         id=model.id,
         question=model.question,
         outcomes=model.outcomes,
-        p_yes=Probability(model.pool.YES),
+        p_yes=Probability(model.probability),
```
Fix :O Earlier I forgot that `pool.YES` actually isn't a probability.
```python
if not include_creator:
    # If we aren't filtering by creator, we need to remove it from `where`, otherwise "creator: null" would return 0 results.
    query = query.replace("creator: $creator,", "")
```
Do you know about some nicer way to do this?
No, it's annoying, but can't think of a better way!
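For what it's worth, one alternative to patching the string afterwards is to assemble the `where` clause conditionally before building the query. A hypothetical sketch (the filter fields are illustrative, not copied from the PR):

```python
def build_markets_query(include_creator: bool) -> str:
    """Assemble the GraphQL `where` filter conditionally instead of string-replacing later."""
    filters = ["outcomeSlotCount: 2"]
    if include_creator:
        filters.append("creator: $creator")
    where = ", ".join(filters)
    return f"""
    query getFixedProductMarketMakers {{
      fixedProductMarketMakers(where: {{{where}}}) {{
        id
      }}
    }}
    """
```

This keeps the query template free of placeholders that must be stripped out again, at the cost of building the string in two steps.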
```python
# Get the current nonce for the given from_address.
# If making multiple transactions quickly after each other,
# it's better to increase it manually (otherwise we could get a stale value from the network and error out).
nonce: Nonce = web3.eth.get_transaction_count(from_address_checksummed)
```
I removed the manual nonce creation and instead added retries on the transaction function. So if it fails because of an invalid nonce, it will wait a bit, the network state will update, and then it will fetch a fresh nonce. For some reason, the manual nonce increasing didn't work correctly when I created N markets in a row.
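The retry approach described here can be sketched generically (a hypothetical decorator; the real code may well use a library such as tenacity instead):

```python
import time
from functools import wraps


def retry_on_error(retries: int = 3, delay_s: float = 0.0):
    """Retry a function on any exception, sleeping between attempts so the
    network state (e.g. the account nonce) has time to update."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            last_exc = None
            for _ in range(retries):
                try:
                    return func(*args, **kwargs)
                except Exception as exc:
                    last_exc = exc
                    time.sleep(delay_s)
            raise last_exc  # all attempts failed; surface the last error
        return wrapper
    return decorator
```

Each failed attempt re-enters the function from the top, so the nonce is re-fetched fresh from the network rather than incremented manually.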
```diff
-GNOSIS_RPC_URL = os.getenv(
-    "GNOSIS_RPC_URL", "https://lb.nodies.app/v1/406d8dcc043f4cb3959ed7d6673d311a"
-)
+GNOSIS_RPC_URL = os.getenv("GNOSIS_RPC_URL", "https://gnosis-rpc.publicnode.com")
```
On the hunt for the most stable RPC!
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (1)
- prediction_market_agent_tooling/markets/omen/replicate/replicate.py (1 hunks)
Files skipped from review as they are similar to previous changes (1)
- prediction_market_agent_tooling/markets/omen/replicate/replicate.py
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (1)
- prediction_market_agent_tooling/markets/omen/replicate/replicate.py (1 hunks)
Files skipped from review as they are similar to previous changes (1)
- prediction_market_agent_tooling/markets/omen/replicate/replicate.py
```python
T = TypeVar("T", bound=Callable[..., Any])


def persistent_inmemory_cache(func: T) -> T:
```
TODO: Remove it from EVO and use it from here.
```python
@persistent_inmemory_cache
def is_predictable(
```
TODO: Remove it from EVO and use it from here.
Review Status
Actionable comments generated: 4
Configuration used: CodeRabbit UI
Files selected for processing (4)
- prediction_market_agent_tooling/config.py (3 hunks)
- prediction_market_agent_tooling/markets/omen/replicate/replicate.py (1 hunks)
- prediction_market_agent_tooling/tools/cache.py (1 hunks)
- prediction_market_agent_tooling/tools/is_predictable.py (1 hunks)
Files skipped from review as they are similar to previous changes (1)
- prediction_market_agent_tooling/markets/omen/replicate/replicate.py
Additional comments: 1
prediction_market_agent_tooling/config.py (1)
- 16-22: The introduction of
SecretStr
for handling sensitive API keys (MANIFOLD_API_KEY
,OPENAI_API_KEY
) and the new attributes (ENABLE_CACHE
,CACHE_DIR
) are significant improvements for security and functionality. Ensure that all instances where these API keys are used have been updated to handle theSecretStr
type correctly, particularly where the API keys are passed to external libraries or functions.
```python
from prediction_market_agent_tooling.config import APIKeys

MEMORY = Memory(APIKeys().CACHE_DIR, verbose=0)
```
The use of `APIKeys().CACHE_DIR` for initializing `MEMORY` directly in the global scope could lead to potential issues if the `APIKeys` class fails to load or if the `.env` file is missing or incorrectly configured. Consider adding error handling or a fallback mechanism to ensure robustness.
```python
def persistent_inmemory_cache(func: T) -> T:
    """
    Wraps a function with both file cache (for persistent cache) and in-memory cache (for speed).
    """
    return cast(T, cache(MEMORY.cache(func)) if APIKeys().ENABLE_CACHE else func)
```
The `persistent_inmemory_cache` decorator is a creative approach to combining file-based and in-memory caching. However, ensure that the `APIKeys().ENABLE_CACHE` check does not introduce significant overhead by repeatedly instantiating `APIKeys` objects. It might be more efficient to store the result of `APIKeys().ENABLE_CACHE` in a variable outside the decorator function.
```python
            self.BET_FROM_PRIVATE_KEY,
            "BET_FROM_PRIVATE_KEY missing in the environment.",
        )

    @property
    def openai_api_key(self) -> SecretStr:
        return check_not_none(
            self.OPENAI_API_KEY, "OPENAI_API_KEY missing in the environment."
        )
```
📝 NOTE: This review was outside the diff hunks and was mapped to the diff hunk with the greatest overlap. Original lines [25-50].

The use of `check_not_none` in the property methods for retrieving API keys and other configurations is a good practice for ensuring that necessary environment variables are set. However, consider the impact of potentially raising exceptions at runtime if these environment variables are missing. It might be beneficial to provide more detailed error messages or documentation on how to set these variables correctly.
```python
@persistent_inmemory_cache
def is_predictable(
    question: str,
    engine: str = "gpt-4-1106-preview",
    prompt_template: str = QUESTION_EVALUATE_PROMPT,
) -> bool:
    """
    Evaluate if the question is actually answerable.
    """
    llm = ChatOpenAI(model=engine, temperature=0.0)

    prompt = ChatPromptTemplate.from_template(template=prompt_template)
    messages = prompt.format_messages(question=question)
    completion = llm(messages, max_tokens=256).content

    if "yes" in completion.lower():
        is_predictable = True
    elif "no" in completion.lower():
        is_predictable = False
    else:
        raise ValueError(f"Error in evaluate_question for `{question}`: {completion}")

    return is_predictable
```
The `is_predictable` function introduces an innovative use of GPT models to evaluate the predictability of questions. However, there are a few considerations:
- Ensure that the `engine` parameter is configurable and documented, as different models may yield different results.
- The hardcoded `max_tokens=256` might not be suitable for all questions. Consider making this configurable or ensuring it's sufficient for the intended use cases.
- The error handling for unexpected responses (the `else` block in lines 45-46) is a good practice, but consider logging the error for easier debugging.
Review Status
Actionable comments generated: 1
Configuration used: CodeRabbit UI
Files selected for processing (3)
- examples/cloud_deployment/gcp/deploy.py (1 hunks)
- prediction_market_agent_tooling/config.py (2 hunks)
- prediction_market_agent_tooling/tools/is_predictable.py (1 hunks)
Files skipped from review as they are similar to previous changes (2)
- prediction_market_agent_tooling/config.py
- prediction_market_agent_tooling/tools/is_predictable.py
Additional comments: 5
examples/cloud_deployment/gcp/deploy.py (5)
- 2-3: The addition of the `json` and `Enum` imports is appropriate for the new functionality introduced in this file. The `json` module is used for parsing environment variables and secrets, and `Enum` is used for defining the `AgentName` enum, which is a clean way to handle agent selection.
- 22-25: The `AgentName` enum is a good practice for defining constants that represent the agent names. This approach enhances code readability and maintainability by using meaningful names instead of string literals throughout the code.
- 29-41: The addition of new parameters (`manifold_api_key_secret_name`, `openai_api_key_secret_name`, `bet_from_address`, `bet_from_private_key_secret_name`, `env_vars`, `secrets`, and `timeout`) to the `main` function significantly enhances the flexibility and configurability of the deployment process. It's important to ensure that these parameters are documented and validated appropriately to prevent misconfiguration.
- 44-46: The update to the agent selection logic using the `AgentName` enum is a clean and maintainable approach. It simplifies the process of adding or modifying agents in the future.
- 83-84: The addition of a `timeout` parameter for deployments is a crucial update that allows for better control over deployment operations, especially for agents that may require longer execution times. This change should be clearly documented to guide users on how to set appropriate timeout values based on their deployment needs.
Review Status
Actionable comments generated: 2
Configuration used: CodeRabbit UI
Files selected for processing (2)
- prediction_market_agent_tooling/tools/is_predictable.py (1 hunks)
- tests/tools/test_is_predictable.py (1 hunks)
Files skipped from review as they are similar to previous changes (1)
- prediction_market_agent_tooling/tools/is_predictable.py
Additional comments: 1
tests/tools/test_is_predictable.py (1)
- 7-7: The use of `RUN_PAID_TESTS` to conditionally skip tests based on external factors (e.g., cost) is a good practice. It allows for flexibility in test execution without removing the tests from the codebase. This approach ensures that critical tests that might incur costs can be selectively run when necessary.
```python
@pytest.mark.parametrize(
    "question, answerable",
    [
        ("Russian nuke in space?", False),
        ("Russian nuke in space by March 31?", True),
        ("Russian nuke in space in 2024?", True),
        (
            "Will there be an AI language model that surpasses ChatGPT and other OpenAI models before the end of 2024?",
            True,
        ),
        ("Will Vladimir Putin be the President of Russia at the end of 2024?", True),
        (
            "This market resolves YES when an artificial agent is appointed to the board of directors of a S&P500 company, meanwhile every day I will bet M25 in NO.",
            False,
        ),
        (
            "Will there be a >0 value liquidity event for me, a former Consensys Software Inc. employee, on my shares of the company?",
            False,
        ),
        ("Will this market have an odd number of traders by the end of 2024?", False),
        ("Did COVID-19 come from a laboratory?", False),
    ],
```
The parameterized test cases cover a variety of scenarios, which is excellent for ensuring the `is_predictable` function can handle different types of questions. However, it's important that the test cases remain relevant and up-to-date with the types of questions the system is expected to evaluate. Regularly reviewing and updating these test cases will help maintain their effectiveness.

Consider adding more diverse test cases or scenarios that might emerge as the system evolves. This could include questions with more nuanced language or different structures, to ensure the `is_predictable` function remains robust against a wide range of inputs.
```python
def test_evaluate_question(question: str, answerable: bool) -> None:
    assert (
        is_predictable(question=question) == answerable
    ), f"Question is not evaluated correctly, see the completion: {is_predictable}"
```
The test function `test_evaluate_question` correctly asserts the outcome of the `is_predictable` function against expected values. The use of a descriptive error message in the assertion is a good practice, as it provides immediate context on test failure. However, the error message could be made more informative by including both the expected and actual outcomes.

Improve the assertion error message to include both the expected and actual outcomes. This will provide more immediate context when diagnosing test failures.

```diff
- ), f"Question is not evaluated correctly, see the completion: {is_predictable}"
+ ), f"Expected {answerable} but got {not answerable} for question: {question}"
```
Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation.
```python
def test_evaluate_question(question: str, answerable: bool) -> None:
    assert (
        is_predictable(question=question) == answerable
    ), f"Expected {answerable} but got {not answerable} for question: {question}"
```
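A further refinement, sketched here rather than taken from the diff, is to evaluate the function once and reuse the result. This avoids calling a potentially expensive predictor twice and guarantees the reported value is exactly the one that was compared (the helper name and the stand-in predictor are illustrative assumptions):

```python
def assert_prediction(question: str, answerable: bool, predictor) -> None:
    # Call the predictor once and reuse the result in both the comparison
    # and the failure message, so the message always reflects reality.
    result = predictor(question)
    assert result == answerable, (
        f"Expected {answerable} but got {result} for question: {question}"
    )

# Example with a trivial stand-in predictor (the real one would be is_predictable):
assert_prediction("Russian nuke in space in 2024?", True, lambda q: "2024" in q)
```

For booleans the two styles report the same value, but evaluating once also matters if the predictor is non-deterministic (e.g. backed by an LLM).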
Review Status
Actionable comments generated: 3
Configuration used: CodeRabbit UI
Files selected for processing (2)
- prediction_market_agent_tooling/benchmark/utils.py (6 hunks)
- prediction_market_agent_tooling/monitor/monitor.py (1 hunks)
Files skipped from review as they are similar to previous changes (1)
- prediction_market_agent_tooling/monitor/monitor.py
Additional comments: 5
prediction_market_agent_tooling/benchmark/utils.py (5)

- 25-25: The addition of `closing_this_month` to the `MarketFilter` enum is a useful enhancement for filtering markets based on their closing time within the current month. Ensure that the logic for determining if a market is closing this month is accurately implemented in the functions that utilize this filter.
- 53-53: The addition of a `close_time` field to the `Market` class, along with its validation, is crucial for accurately tracking and filtering markets based on their closing times. Ensure that the datetime values assigned to this field are timezone-aware to avoid potential issues with time comparisons.
- 65-70: The implementation of timezone validation for the `created_time` and `close_time` fields in the `Market` class using `field_validator` is a good practice to ensure consistency in datetime values. This approach helps in avoiding common pitfalls related to timezone handling.
- 214-218: The mapping of JSON fields to `Market` fields in the `get_manifold_markets` function is correctly updated to include the `closeTime` field. This change is necessary for the new `close_time` field in the `Market` class to function properly. Ensure that the data from the Manifold API matches the expected format for this field.
- 351-357: The update to the `get_polymarket_markets` function to include `category` and `close_time` fields from JSON is a necessary change to support the new fields in the `Market` class. Ensure that the data from the Polymarket API matches the expected formats for these fields, especially `close_time`, which requires parsing from an ISO string.
```diff
@@ -44,10 +45,12 @@ class CancelableMarketResolution(str, Enum):
 class Market(BaseModel):
     source: MarketSource
     question: str
+    category: str | None = None
```
Adding an optional `category` field to the `Market` class enhances market categorization capabilities. It's important to ensure that the values assigned to this field are consistent and well-documented, especially since they seem to be derived from external data sources.

Would you like me to add documentation for the `category` field?
```python
# On Polymarket, there are markets that are actually a group of multiple Yes/No markets,
# for example https://polymarket.com/event/presidential-election-winner-2024.
# But the API returns them individually, and then we receive questions such as
# "Will any other Republican Politician win the 2024 US Presidential Election?",
# which are naturally unpredictable without further details.
# Also, URLs constructed for them with the logic below don't work.
# This is a heuristic to filter them out.
if len(m_json["events"]) > 1 or m_json["events"][0]["slug"] != m_json["slug"]:
    continue
```
The heuristic added to the `get_polymarket_markets` function to filter out markets that are part of a group of multiple Yes/No markets is thoughtful. However, documenting the rationale behind this heuristic within the code would help future maintainers understand its purpose and potential limitations.

Consider adding more detailed comments explaining the rationale behind this heuristic.
```python
if filter_ == MarketFilter.closing_this_month:
    raise ValueError(f"Polymarket doesn't support filtering by closing soon.")
```
The validation for filtering by `closing_this_month` in the `get_markets` function, which raises a `ValueError` when this filter is applied to Polymarket markets, is a clear way to communicate the limitation of the Polymarket API. If this feature is critical, consider implementing a workaround or manually calculating which markets are closing this month.

Consider adding a workaround for the limitation of the Polymarket API regarding the `closing_this_month` filter.
Evan/replication
Review Status
Actionable comments generated: 4
Configuration used: CodeRabbit UI
Files selected for processing (7)
- .github/actions/python_prepare/action.yaml (1 hunks)
- examples/cloud_deployment/gcp/deploy.py (1 hunks)
- examples/replicate_markets/agent_example.py (1 hunks)
- examples/replicate_markets/deploy.py (1 hunks)
- prediction_market_agent_tooling/deploy/agent.py (2 hunks)
- prediction_market_agent_tooling/markets/omen/omen.py (21 hunks)
- scripts/replicate_to_omen.py (1 hunks)
Files skipped from review as they are similar to previous changes (2)
- prediction_market_agent_tooling/deploy/agent.py
- scripts/replicate_to_omen.py
Additional comments: 7
.github/actions/python_prepare/action.yaml (1)

- 15-15: The addition of the `--all-extras` flag to the `poetry install` command ensures that all optional dependencies specified in the project's `pyproject.toml` file are installed. This change can significantly improve the setup process by ensuring that all necessary dependencies for various features and tests are installed. However, it's important to consider the potential impact on the installation time and the size of the environment, as installing all extras might introduce dependencies that are not needed for certain workflows or deployments.

prediction_market_agent_tooling/markets/omen/omen.py (6)

- 2-2: The addition of `timedelta` from the `datetime` module is appropriate for operations involving time calculations. This import is necessary for the new functionalities introduced in this PR, such as handling market closing times.
- 5-5: The import of `pytz` is justified, as it is used to handle timezone information, which is crucial for accurately scheduling and managing market closing times in a global context.
- 7-12: The imports from `prediction_market_agent_tooling.benchmark.utils` are correctly added to support the new functionalities related to market analysis and replication. These imports are essential for filtering, sorting, and sourcing markets, as well as fetching market data.
- 231-232: The `construct_query_get_fixed_product_markets_makers` function has been updated to include a `creator` parameter, allowing for more granular querying of markets based on the creator. This is a useful enhancement for filtering markets, especially when replicating markets from specific creators.
- 274-276: The addition of the `creator` parameter to the `get_omen_markets` function signature is a logical extension of the previous change, enabling the fetching of markets based on the creator. This aligns with the PR's objectives of enhancing market analysis and replication capabilities.
- 864-877: The `omen_create_market_deposit_tx` function is a new addition that encapsulates the process of depositing initial funds into a market. This function simplifies the market creation process by abstracting away the details of the deposit transaction. It's a valuable addition for improving the modularity and readability of the market creation workflow.
```python
def omen_replicate_from_tx(
    market_source: MarketSource,
    n_to_replicate: int,
    initial_funds: xDai,
    from_address: ChecksumAddress,
    from_private_key: PrivateKey,
    last_n_omen_markets_to_fetch: int = 1000,
    close_time_before: datetime | None = None,
    auto_deposit: bool = False,
) -> list[ChecksumAddress]:
    already_created_markets = get_omen_binary_markets(
        limit=last_n_omen_markets_to_fetch,
        creator=from_address,
    )
    if len(already_created_markets) == last_n_omen_markets_to_fetch:
        raise ValueError(
            "TODO: Switch to paged version (once available) to fetch all markets, we don't know if we aren't creating duplicate now."
        )

    markets = get_markets(
        100,
        market_source,
        filter_=(
            MarketFilter.closing_this_month
            if market_source == MarketSource.MANIFOLD
            else MarketFilter.open
        ),
        sort=MarketSort.newest if market_source == MarketSource.MANIFOLD else None,
        excluded_questions=set(m.question for m in already_created_markets),
    )
    markets_sorted = sorted(
        markets,
        key=lambda m: m.volume,
        reverse=True,
    )
    markets_to_replicate = [
        m
        for m in markets_sorted
        if close_time_before is None or m.close_time <= close_time_before
    ][:n_to_replicate]
    if not markets_to_replicate:
        print(f"No markets found for {market_source}")
        return []

    print(f"Found {len(markets_to_replicate)} markets to replicate.")

    # Get a set of possible categories from existing markets (but created by anyone, not just your agent)
    existing_categories = set(
        m.category
        for m in get_omen_binary_markets(
            limit=last_n_omen_markets_to_fetch,
        )
    )

    created_addresses: list[ChecksumAddress] = []

    for market in markets_to_replicate:
        if not is_predictable(market.question):
            print(
                f"Skipping `{market.question}` because it seems to not be predictable."
            )
            continue
        # Close a day sooner than the original market, because of timezone differences.
        closing_time = market.close_time - timedelta(hours=24)
        # Force at least 24 hours of open market.
        soonest_allowed_closing_time = datetime.utcnow().replace(
            tzinfo=pytz.UTC
        ) + timedelta(hours=24)
        if closing_time <= soonest_allowed_closing_time:
            print(
                f"Skipping `{market.question}` because it closes sooner than {soonest_allowed_closing_time}."
            )
            continue
        category = infer_category(market.question, existing_categories)
        market_address = omen_create_market_tx(
            initial_funds=initial_funds,
            fee=OMEN_DEFAULT_MARKET_FEE,
            question=market.question,
            closing_time=closing_time,
            category=category,
            language="en",
            from_address=from_address,
            from_private_key=from_private_key,
            outcomes=[OMEN_TRUE_OUTCOME, OMEN_FALSE_OUTCOME],
            auto_deposit=auto_deposit,
        )
        print(
            f"Created `https://aiomen.eth.limo/#/{market_address}` for `{market.question}` in category {category} out of {market.url}."
        )

    return created_addresses
```
The `omen_replicate_from_tx` function introduces significant new functionality for replicating markets. It incorporates several best practices, such as sorting markets by volume and excluding non-predictable questions. However, there are a few areas for improvement:

- The handling of timezone differences and the enforcement of a minimum open-market duration are well thought out. However, the hardcoded 24-hour adjustment (`timedelta(hours=24)`) might benefit from being configurable to accommodate different scenarios or market rules.
- The use of `print` statements for logging is not ideal for production code. Consider using a logging framework that allows for different logging levels and better control over the logging output.
- The function's complexity is relatively high due to its length and the number of operations performed. Consider breaking it down into smaller, more focused functions to improve readability and maintainability.

```python
# Suggestion for improvement 1: Make the adjustment hours configurable
def omen_replicate_from_tx(
    ...,
    adjustment_hours: int = 24,  # Add this parameter
    ...
):
    ...
    # Use the parameter instead of the hardcoded value
    closing_time = market.close_time - timedelta(hours=adjustment_hours)
    ...

# Suggestion for improvement 2: Use a logging framework
import logging
logger = logging.getLogger(__name__)
# Replace print statements with logging
logger.info(f"Found {len(markets_to_replicate)} markets to replicate.")

# Suggestion for improvement 3: Break down the function into smaller parts
def filter_markets(markets, close_time_before):
    # Implement filtering logic here
    ...

def replicate_market(market, existing_categories, initial_funds, ...):
    # Implement market replication logic here
    ...
```
examples/replicate_markets/deploy.py
Outdated
```python
    env_vars=None,  # TODO add env vars
    secrets=None,  # TODO add secrets
```
The `TODO` comments on lines 25 and 26 indicate that environment variables and secrets need to be added for the deployment. It's crucial to address these `TODO` items before proceeding with the deployment to ensure that the deployed function has all the necessary configurations and access to the secure resources it might need to operate correctly.

Would you like me to help with defining the environment variables and secrets, or open a GitHub issue to track this task?
```python
class DeployableReplicateToOmenAgent(DeployableAgent):
    def run(
        self, market_type: MarketType = MarketType.MANIFOLD, _place_bet: bool = True
    ) -> None:
        keys = APIKeys()
        settings = ReplicateSettings()
        close_time_before = (
            datetime.utcnow() + timedelta(days=settings.CLOSE_TIME_UP_TO_N_DAYS)
        ).replace(tzinfo=pytz.UTC)
        initial_funds_per_market = xdai_type(settings.INITIAL_FUNDS)
        deposit_funds_per_replication = xdai_type(
            initial_funds_per_market * settings.N_TO_REPLICATE
        )

        print(f"Replicating from {MarketSource.MANIFOLD}.")
        # Deposit enough of xDai for all N markets to be replicated, so we don't re-deposit in case of re-tries.
        omen_create_market_deposit_tx(
            deposit_funds_per_replication,
            keys.bet_from_address,
            keys.bet_from_private_key,
        )
        omen_replicate_from_tx(
            market_source=MarketSource.MANIFOLD,
            n_to_replicate=settings.N_TO_REPLICATE,
            initial_funds=initial_funds_per_market,
            from_address=keys.bet_from_address,
            from_private_key=keys.bet_from_private_key,
            close_time_before=close_time_before,
            auto_deposit=False,
        )
        print(f"Replicating from {MarketSource.POLYMARKET}.")
        # Deposit enough of xDai for all N markets to be replicated, so we don't re-deposit in case of re-tries.
        omen_create_market_deposit_tx(
            deposit_funds_per_replication,
            keys.bet_from_address,
            keys.bet_from_private_key,
        )
        omen_replicate_from_tx(
            market_source=MarketSource.POLYMARKET,
            n_to_replicate=settings.N_TO_REPLICATE,
            initial_funds=initial_funds_per_market,
            from_address=keys.bet_from_address,
            from_private_key=keys.bet_from_private_key,
            close_time_before=close_time_before,
            auto_deposit=False,
        )
        print("Done.")


@functions_framework.http
def main(request: Request) -> str:
    DeployableReplicateToOmenAgent().run()
    return "Success"
```
The `DeployableReplicateToOmenAgent` class provides a comprehensive implementation for replicating markets to Omen from both Manifold and Polymarket sources. The use of the `ReplicateSettings` class for configuration management and the structured approach to handling API keys and settings are commendable practices that enhance maintainability and readability.

However, there are a couple of areas that could be improved:

- Error handling: Consider adding error handling around external calls, such as `omen_create_market_deposit_tx` and `omen_replicate_from_tx`, to gracefully handle any failures or exceptions that might occur during the replication process.
- Duplication: The code for depositing funds and replicating markets is duplicated for both Manifold and Polymarket sources. This could be refactored into a separate method to reduce duplication and improve code maintainability.

Consider refactoring the code to reduce duplication and add error handling around external calls. Here's a suggested approach for refactoring:

```python
def replicate_markets(self, market_source):
    print(f"Replicating from {market_source}.")
    # Deposit enough of xDai for all N markets to be replicated, so we don't re-deposit in case of re-tries.
    omen_create_market_deposit_tx(
        self.deposit_funds_per_replication,
        self.keys.bet_from_address,
        self.keys.bet_from_private_key,
    )
    omen_replicate_from_tx(
        market_source=market_source,
        n_to_replicate=self.settings.N_TO_REPLICATE,
        initial_funds=self.initial_funds_per_market,
        from_address=self.keys.bet_from_address,
        from_private_key=self.keys.bet_from_private_key,
        close_time_before=self.close_time_before,
        auto_deposit=False,
    )
```

And then call this method for each market source in the `run` method.
Review Status
Actionable comments generated: 2
Configuration used: CodeRabbit UI
Files selected for processing (1)
- prediction_market_agent_tooling/markets/omen/omen.py (21 hunks)
Additional comments: 2
prediction_market_agent_tooling/markets/omen/omen.py (2)

- 2-2: The added imports for `timedelta`, `pytz`, and various utilities from within the project (`MarketFilter`, `MarketSort`, `MarketSource`, `get_markets`) are appropriate and necessary for the functionalities being introduced or modified in this file. Good job ensuring that all necessary dependencies are included. Also applies to: 5-5, 7-11
- 275-275: The update to the `get_omen_markets` function signature to include an optional `creator` parameter is a thoughtful addition. It enables filtering markets by the creator's address, which can be particularly useful for specific analyses or operations. This change is backward compatible and adds valuable functionality.
Review Status
Actionable comments generated: 3
Configuration used: CodeRabbit UI
Files ignored due to path filters (1)
- `pyproject.toml` is excluded by: `!**/*.toml`
Files selected for processing (1)
- prediction_market_agent_tooling/markets/omen/omen.py (21 hunks)
Additional comments: 4
prediction_market_agent_tooling/markets/omen/omen.py (4)

- 2-2: The addition of `timedelta` from the `datetime` module is appropriate for operations involving time calculations, which are common in market-related functionalities.
- 5-5: The import of `pytz` is necessary for handling timezone-aware datetime objects, especially relevant for market closing times and other time-sensitive operations.
- 7-11: The imports for `MarketFilter`, `MarketSort`, `MarketSource`, and `get_markets` are correctly added to support market analysis and replication functionalities.
- 232-232: The default value for the `creator` parameter in the GraphQL query construction function `construct_query_get_fixed_product_markets_makers` is set to `null`. This approach is acceptable, as it allows for optional filtering by creator. However, ensure that the logic for removing the `creator` filter when not needed (line 259) is thoroughly tested to avoid unintended query results.
```python
    if not include_creator:
        # If we aren't filtering by query, we need to remove it from where, otherwise "creator: null" will return 0 results.
        query = query.replace("creator: $creator,", "")
```
The method of removing the `creator` filter from the GraphQL query by replacing `"creator: $creator,"` with an empty string when `include_creator` is `False` is a pragmatic solution. However, consider adding a comment explaining why this is necessary for future maintainability.

```python
# If include_creator is False, we remove the creator filter from the query to avoid filtering by a null creator, which would return no results.
query = query.replace("creator: $creator,", "")
```
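An alternative to post-hoc string replacement is to assemble the `where` clause conditionally, so the creator filter is simply never emitted when it isn't wanted. This is a sketch under the assumption that the query is built as a plain string; the base filter shown is illustrative, not the real subgraph schema:

```python
def build_markets_query(include_creator: bool) -> str:
    # Build the `where` clause from parts instead of string-replacing
    # "creator: $creator," out of a fixed template afterwards.
    where_parts = ["isPendingArbitration: false"]  # illustrative base filter
    if include_creator:
        where_parts.append("creator: $creator")
    where_clause = ", ".join(where_parts)
    return "query { fixedProductMarketMakers(where: {" + where_clause + "}) { id } }"
```

This avoids the subtle coupling between the template text and the exact replacement string (a changed comma or space would silently break the `replace`).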
```python
def omen_create_market_deposit_tx(
    initial_funds: xDai,
    from_address: ChecksumAddress,
    from_private_key: PrivateKey,
) -> TxReceipt:
    web3 = Web3(Web3.HTTPProvider(GNOSIS_RPC_URL))
    return omen_deposit_collateral_token_tx(
        web3=web3,
        collateral_token_contract_address=DEFAULT_COLLATERAL_TOKEN_CONTRACT_ADDRESS,
        amount_wei=xdai_to_wei(initial_funds),
        from_address=from_address,
        from_private_key=from_private_key,
    )
```
The implementation of `omen_create_market_deposit_tx` correctly handles the deposit transaction by converting `initial_funds` from xDai to Wei and using the `omen_deposit_collateral_token_tx` utility. As previously noted, consider adding error handling for the transaction call to improve robustness.

```python
try:
    # Existing call to omen_deposit_collateral_token_tx
    ...
except Exception as e:
    # Handle or log the exception appropriately
    ...
```
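A concrete version of that pattern, sketched as a small wrapper (the helper name and logger setup are assumptions, not code from the PR):

```python
import logging

logger = logging.getLogger(__name__)

def with_tx_error_logging(fn, *args, **kwargs):
    # Run a transaction call, log any failure with a traceback, and re-raise
    # so the caller can still decide whether to retry or abort.
    try:
        return fn(*args, **kwargs)
    except Exception:
        logger.exception(
            "Transaction call %s failed", getattr(fn, "__name__", repr(fn))
        )
        raise
```

A call site would then read `with_tx_error_logging(omen_deposit_collateral_token_tx, web3=web3, ...)` instead of invoking the transaction function directly.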
```python
def omen_replicate_from_tx(
    market_source: MarketSource,
    n_to_replicate: int,
    initial_funds: xDai,
    from_address: ChecksumAddress,
    from_private_key: PrivateKey,
    last_n_omen_markets_to_fetch: int = 1000,
    close_time_before: datetime | None = None,
    auto_deposit: bool = False,
) -> list[ChecksumAddress]:
    already_created_markets = get_omen_binary_markets(
        limit=last_n_omen_markets_to_fetch,
        creator=from_address,
    )
    if len(already_created_markets) == last_n_omen_markets_to_fetch:
        raise ValueError(
            "TODO: Switch to paged version (once available) to fetch all markets, we don't know if we aren't creating duplicates now."
        )

    markets = get_markets(
        100,
        market_source,
        filter_=(
            MarketFilter.closing_this_month
            if market_source == MarketSource.MANIFOLD
            else MarketFilter.open
        ),
        sort=MarketSort.newest if market_source == MarketSource.MANIFOLD else None,
        excluded_questions=set(m.question for m in already_created_markets),
    )
    markets_sorted = sorted(
        markets,
        key=lambda m: m.volume,
        reverse=True,
    )
    markets_to_replicate = [
        m
        for m in markets_sorted
        if close_time_before is None or m.close_time <= close_time_before
    ]
    if not markets_to_replicate:
        print(f"No markets found for {market_source}")
        return []

    print(f"Found {len(markets_to_replicate)} markets to replicate.")

    # Get a set of possible categories from existing markets (but created by anyone, not just your agent)
    existing_categories = set(
        m.category
        for m in get_omen_binary_markets(
            limit=last_n_omen_markets_to_fetch,
        )
    )

    created_addresses: list[ChecksumAddress] = []

    for market in markets_to_replicate:
        if not is_predictable(market.question):
            print(
                f"Skipping `{market.question}` because it seems to not be predictable."
            )
            continue
        # Close a day sooner than the original market, because of timezone differences.
        closing_time = market.close_time - timedelta(hours=24)
        # Force at least 24 hours of open market.
        soonest_allowed_closing_time = datetime.utcnow().replace(
            tzinfo=pytz.UTC
        ) + timedelta(hours=24)
        if closing_time <= soonest_allowed_closing_time:
            print(
                f"Skipping `{market.question}` because it closes sooner than {soonest_allowed_closing_time}."
            )
            continue
        category = infer_category(market.question, existing_categories)
        market_address = omen_create_market_tx(
            initial_funds=initial_funds,
            fee=OMEN_DEFAULT_MARKET_FEE,
            question=market.question,
            closing_time=closing_time,
            category=category,
            language="en",
            from_address=from_address,
            from_private_key=from_private_key,
            outcomes=[OMEN_TRUE_OUTCOME, OMEN_FALSE_OUTCOME],
            auto_deposit=auto_deposit,
        )
        created_addresses.append(market_address)
        print(
            f"Created `https://aiomen.eth.limo/#/{market_address}` for `{market.question}` in category {category} out of {market.url}."
        )

        if len(created_addresses) >= n_to_replicate:
            print(
                f"Replicated {len(created_addresses)} from {market_source}, breaking."
            )
            break

    return created_addresses
```
The `omen_replicate_from_tx` function is well implemented, with clear logic for replicating markets from a specified source to Omen. It correctly handles filtering, categorization, and the creation of new markets based on specified criteria. As previously noted, consider adding a note or TODO comment about the potential need for pagination support when fetching existing markets to avoid duplicates, as this could become a performance bottleneck with a large number of markets.

```python
# TODO: Consider adding pagination support for fetching existing markets to handle large datasets efficiently.
```
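Once a paged API is available, the fetch could be wrapped in a generic offset-based loop so the "did we hit the limit?" guard is no longer needed. A sketch, where `fetch_page` stands in for whatever the subgraph client eventually exposes:

```python
from typing import Callable, TypeVar

T = TypeVar("T")

def fetch_all(
    fetch_page: Callable[[int, int], list[T]], page_size: int = 100
) -> list[T]:
    # Keep requesting pages (limit, offset) until one comes back short,
    # which signals there is nothing left to fetch.
    items: list[T] = []
    offset = 0
    while True:
        page = fetch_page(page_size, offset)
        items.extend(page)
        if len(page) < page_size:
            return items
        offset += page_size
```

With this in place, `already_created_markets` could be fetched exhaustively instead of capped at `last_n_omen_markets_to_fetch`.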
Review Status
Actionable comments generated: 2
Configuration used: CodeRabbit UI
Files selected for processing (5)
- examples/cloud_deployment/gcp/deploy.py (1 hunks)
- examples/replicate_markets/deploy.py (1 hunks)
- prediction_market_agent_tooling/deploy/agent.py (4 hunks)
- prediction_market_agent_tooling/deploy/constants.py (1 hunks)
- prediction_market_agent_tooling/tools/utils.py (2 hunks)
Files skipped from review as they are similar to previous changes (2)
- examples/replicate_markets/deploy.py
- prediction_market_agent_tooling/deploy/agent.py
Additional comments: 3
prediction_market_agent_tooling/deploy/constants.py (1)

- 3-3: The change from `COMMIT_KEY` to `OWNER_KEY` alters the semantic meaning related to repository ownership. Ensure that all references to `COMMIT_KEY` throughout the codebase have been updated to `OWNER_KEY` to maintain consistency.

Verification successful

The search for `COMMIT_KEY` did not yield any results, indicating that all references to it have likely been updated to `OWNER_KEY` as intended, or that it was not used elsewhere in the project. This suggests that the change has been correctly implemented across the codebase.

Scripts Executed

The following scripts were executed for the analysis:

Script:

```shell
#!/bin/bash
# Search for any remaining references to COMMIT_KEY in the project.
rg --type py 'COMMIT_KEY'
```

Length of output: 25

prediction_market_agent_tooling/tools/utils.py (1)

- 75-80: The addition of the `get_current_git_commit_sha` and `get_current_git_branch` functions enhances the tool's capabilities to interact with Git repositories programmatically. These functions are implemented correctly and follow best practices.

examples/cloud_deployment/gcp/deploy.py (1)

- 2-3: The enhancements made to the `deploy.py` script, including the addition of imports, new parameters, and the updated agent selection logic, align well with the PR's objectives. These changes improve the script's functionality and configuration management capabilities. Also applies to: 20-22, 26-37, 40-41, 44-54, 56-73, 78-78
```python
from datetime import datetime
from typing import NoReturn, Optional, Type, TypeVar

import git
import pytz

T = TypeVar("T")
```
📝 NOTE
This review was outside the diff hunks, and no overlapping diff hunk was found. Original lines [61-65]

Using `subprocess.run` with `shell=True` in the `export_requirements_from_toml` function poses a security risk. It's recommended to use `shell=False` and provide the command as a list of arguments to mitigate potential vulnerabilities.

```diff
- subprocess.run(f"poetry export -f requirements.txt --without-hashes --output {output_file}", shell=True, check=True)
+ subprocess.run(["poetry", "export", "-f", "requirements.txt", "--without-hashes", "--output", output_file], shell=False, check=True)
```
```python
    env_vars=json.loads(env_vars) if env_vars else None,
    # You can allow the cloud function to access secrets by adding the role: `gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} --member=serviceAccount:${GCP_SVC_ACC} --role=roles/container.admin`.
    # Must be in the format "env_var_in_container => secret_name:version", you can create secrets using `gcloud secrets create --labels owner=<your-name> <secret-name>` command.
    secrets=json.loads(secrets) if secrets else None,
```
The handling of `env_vars` and `secrets` using JSON parsing allows for a more dynamic configuration. However, as previously mentioned, consider adding error handling for JSON parsing to gracefully handle parsing errors and improve robustness.

```diff
- env_vars=json.loads(env_vars) if env_vars else None,
+ env_vars=try_parse_json(env_vars),
- secrets=json.loads(secrets) if secrets else None,
+ secrets=try_parse_json(secrets),
+ def try_parse_json(json_str):
+     try:
+         return json.loads(json_str) if json_str else None
+     except json.JSONDecodeError:
+         log.error("Failed to parse JSON string.")
+         return None
```
Committable suggestion
```python
    env_vars=try_parse_json(env_vars),
    # You can allow the cloud function to access secrets by adding the role: `gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} --member=serviceAccount:${GCP_SVC_ACC} --role=roles/container.admin`.
    # Must be in the format "env_var_in_container => secret_name:version", you can create secrets using `gcloud secrets create --labels owner=<your-name> <secret-name>` command.
    secrets=try_parse_json(secrets),

def try_parse_json(json_str):
    try:
        return json.loads(json_str) if json_str else None
    except json.JSONDecodeError:
        log.error("Failed to parse JSON string.")
        return None
```
No further requested changes, LGTM!!
Summary by CodeRabbit
- `timeout` and `cron_schedule` for improved deployment flexibility.
- `SecretStr` type for API keys to enhance security.
- `DeployedAgent` class.