28 Aug 2023, 14:40
With the launch of Computation Layer’s SkyNet Upgrade just on the horizon, we are glad to announce a pleasant surprise that will precede it – unveiling PhoenixLLM, a Large Language Model (LLM) compute service powered by Phoenix Computation Layer. PhoenixLLM is a GPT-like AI model compute service with 7 to 33 billion parameters; it is designed to scale on Phoenix’s SkyNet and is customizable for industry-specific use cases such as crypto & Web3, macro research, and technology sector research, to name a few.
Some key highlights and differentiators of PhoenixLLM include:
1) Runs and scales on fully decentralized infrastructure powered by Computation Layer, making it highly cost-effective.
2) Model inference (use of the chat features) is free of charge; instead, it requires staking $PHB (part of our broader Hybrid Staking initiative).
3) SkyNet will support open-source LLMs in addition to PhoenixLLM’s general knowledge model and specialized vertical models.
4) Custom model training will cost far less than industry benchmarks, and can be enabled via staking.
PhoenixLLM’s General Knowledge Model will shortly be available via Computation Layer to both developers and the Phoenix community and ecosystem. Phoenix has begun working with partners to deploy industry-focused and enterprise LLMs on SkyNet – more details will follow!
Like, retweet, and share to help spread the wings of the Phoenix https://twitter.com/Phoenix_Chain/status/1696171016928559165?s=20