@casual-simulation/aux-records
Changelog
V3.0.10
Improved the Records API to be able to return errors to allowed HTTP origins.
Improved os.meetCommand()
to return a promise.
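Because it now resolves a promise, scripts can wait for a command to finish dispatching before continuing. A minimal sketch, assuming os.meetCommand() accepts a Jitsi Meet command name such as 'toggleVideo':
// Wait for the command to be dispatched to the meet portal before continuing.
await os.meetCommand('toggleVideo');
os.toast('Toggled the camera.');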
Added the ability to specify an options object with os.recordData(key, address, data, options)
that can specify update and delete policies for the data.
These policies can be useful to restrict the set of users that can manipulate the recorded data.
options
is an object with the following properties:
let options: {
/**
* The HTTP Endpoint that should be queried.
*/
endpoint?: string;
/**
* The policy that should be used for updating the record.
* - true indicates that the value can be updated by anyone.
* - An array of strings indicates the list of user IDs that are allowed to update the data.
*/
updatePolicy?: true | string[];
/**
* The policy that should be used for deleting the record.
* - true indicates that the value can be erased by anyone.
* - An array of strings indicates the list of user IDs that are allowed to delete the data.
* Note that even if a delete policy is used, the owner of the record can still erase any data in the record.
*/
deletePolicy?: true | string[];
};
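For example, here is a minimal sketch of recording a value with both policies set. The record key, address, data, and user IDs below are placeholders:
// Assumes `recordKey` was created earlier (for example via os.getPublicRecordKey()).
await os.recordData(recordKey, 'scores/alice', { points: 10 }, {
    // Only these (hypothetical) user IDs are allowed to update the value.
    updatePolicy: ['userId-1234', 'userId-5678'],
    // Anyone is allowed to erase the value (the record owner always can).
    deletePolicy: true,
});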
Added the os.tip(message, pixelX?, pixelY?, duration?)
and os.hideTips(tipIDs?)
functions to make showing tooltips easy.
os.tip(message, pixelX?, pixelY?, duration?)
can be used to show a tooltip and takes the following parameters:
message
is the message that should be shown.
pixelX
is optional and is the horizontal pixel position on the screen that the message should be shown at.
If omitted, then the tooltip will be shown at the current mouse position or the last touch position.
Additionally, omitting the position will cause the tooltip to only be shown when the mouse is near it.
Moving the mouse away from the tooltip in this mode will cause the tooltip to be automatically hidden.
pixelY
is optional and is the vertical pixel position on the screen that the message should be shown at.
If omitted, then the tooltip will be shown at the current mouse position or the last touch position.
Additionally, omitting the position will cause the tooltip to only be shown when the mouse is near it.
Moving the mouse away from the tooltip in this mode will cause the tooltip to be automatically hidden.
duration
is optional and is the number of seconds that the tooltip should be visible for.
os.hideTips(tipIDs?)
can be used to hide a tooltip and takes the following parameters:
tipIDs
is optional and is the ID or array of IDs of tooltips that should be hidden. If omitted, then all tooltips will be hidden. (See the usage sketch below.)
Improved the menuPortal to use 60% of the screen width on large screens when the screen is taller than it is wide.
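Returning to os.tip() and os.hideTips(), here is a minimal usage sketch. It assumes os.tip() resolves with the ID of the new tooltip, which can then be passed to os.hideTips():
// Show a tooltip near the top-left corner of the screen for 5 seconds.
const tipID = await os.tip('Click a bot to select it.', 100, 100, 5);
// Later, hide that specific tooltip. Calling os.hideTips() with no arguments hides all tooltips.
await os.hideTips(tipID);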
Improved the systemPortal to support system
tag values that are set to non-string values such as booleans and integers.
Added WebXR hand tracking support.
endpoint parameter.
Changelog
V3.0.5
The following functions now accept an optional endpoint parameter:
os.recordData(key, address, data, endpoint?)
os.getData(recordName, address, endpoint?)
os.listData(recordName, startingAddress?, endpoint?)
os.eraseData(key, address, endpoint?)
os.recordManualApprovalData(key, address, data, endpoint?)
os.getManualApprovalData(recordName, address, endpoint?)
os.listManualApprovalData(recordName, startingAddress?, endpoint?)
os.eraseManualApprovalData(key, address, endpoint?)
os.recordFile(key, data, options?, endpoint?)
os.eraseFile(key, url, endpoint?)
os.recordEvent(key, eventName, endpoint?)
os.countEvents(recordName, eventName, endpoint?)
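For example, a data record can be read from a custom records server by passing its URL as the endpoint argument. The URL below is a placeholder:
// Query a data record from a custom records endpoint instead of the default one.
const result = await os.getData('myRecord', 'scores/alice', 'https://records.example.com');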
Added the os.getSubjectlessPublicRecordKey(recordName)
function to make it possible to create a record key that allows publishing record data without being logged in.
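Here is a sketch of creating and using such a key, assuming the result object exposes success and recordKey properties:
// Create a record key that allows publishing data without a logged-in user.
const result = await os.getSubjectlessPublicRecordKey('myRecord');
if (result.success) {
    // The key can then be passed to the record functions, for example os.recordData().
    await os.recordData(result.recordKey, 'public/greeting', 'hello');
}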
Added the os.meetFunction(functionName, ...args)
function to allow querying the current meet portal meeting state.
functionName
is the name of the function that should be triggered from the Jitsi Meet API.
args
is the list of arguments that should be provided to the function. (See the usage sketch below.)
Added the @onMeetEntered
and @onMeetExited
shouts, which are triggered whenever the current user starts/stops participating in a meet.
Unlike @onMeetLoaded, @onMeetEntered
is only triggered after the user clicks the "Join" button from the meeting waiting room.
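Here is the usage sketch for os.meetFunction() referenced above. It assumes the call resolves a promise and that the Jitsi Meet getParticipantsInfo function is available in the current meeting:
// Ask the meet portal's Jitsi instance for the current participant list.
const participants = await os.meetFunction('getParticipantsInfo');
os.toast('Participants in the meet: ' + participants.length);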
Added the meetPortalJWT
tag to the meetPortalBot to allow using JSON Web Tokens for authenticating moderators in meetings.
Added the botPortal
tag that, when set to a bot ID on the configBot,
will show the JSON data for that bot.
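For example, a minimal sketch of opening the botPortal for the current bot (thisBot is just a placeholder for whichever bot you want to inspect):
// Show the JSON data for this bot by pointing the botPortal tag at its ID.
configBot.tags.botPortal = thisBot.id;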
The botPortalAnchorPoint
and botPortalStyle
tags can be set on the botPortalBot
similarly to how meetPortalAnchorPoint
can be set on the meetPortalBot.
Added the systemTagName
tag that, when set on the config bot, specifies the tag that should be used when finding bots to include in the systemPortal.
For example, setting systemTagName
to "test"
will cause the systemPortal to search for bots that have a test
tag instead of a system
tag.
Fixed an issue where accessing globalThis
would cause an error to occur.
Fixed an issue where calling os.replaceDragBot()
with bots that contained an array in their tags would cause an error.
Fixed an issue where videos set via formAddress
would not automatically play on Chrome web browsers.
Changelog
V3.0.0
Added the os.openImageClassifier(options)
and os.closeImageClassifier()
functions.
To try it out, create an @onClick
tag and put the following code in it (replacing MY_MODEL_URL
with the shareable link):
await os.openImageClassifier({
modelUrl: 'MY_MODEL_URL',
});
options
is an object with the following properties:
modelUrl
- The shareable link that was generated from Teachable Machine.
modelJsonUrl
- Is optional and can be used in advanced scenarios where you want to control where the model is stored.
modelMetadataUrl
- Is optional and can be used in advanced scenarios where you want to control where the model is stored.
cameraType
- Is optional and is the type of camera that should be preferred. Can be "front" or "rear".
Created the oai-1
appBundle.
This appBundle is currently a simple ab that can query the OpenAI GPT-3 API via a shout.
The ab has the following features:
A single manager bot in the oai-1
dimension, shown in the systemPortal as oai-1.manager.
@generateTextResponse
is a listener that asks GPT-3 to respond to a given text prompt.
It takes the following parameters:
apiKey
- The API key that should be used to access the API. You can get an API key at https://beta.openai.com/overview.
prompt
- The text that the AI should respond to. An example is "Write a tagline for an ice cream shop.". Also see this guide: https://beta.openai.com/docs/guides/completion.
engine
- The engine that should be used to process the prompt. Defaults to "text-davinci-001"
if not specified. A list of engines is available here: https://beta.openai.com/docs/engines.
options
- An object that contains additional options for the request. You can find the documentation for these options here: https://beta.openai.com/docs/api-reference/completions/create.
It returns a promise that contains a list of generated choices.
Example:
let oai = getBot('system', 'oai-1.manager');
const response = await oai.generateTextResponse({
apiKey: 'myAPIKey',
prompt: 'Write a tagline for an ice cream shop.',
});
if (response.choices.length > 0) {
os.toast('Best choice: ' + response.choices[0]);
} else {
os.toast('No choices.');
}
Fixed an issue with os.listData()
where it was impossible to list data items unless a starting address was provided.
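With this fix, listing can start from the beginning of a record simply by omitting the starting address. A minimal sketch (the record name is a placeholder):
// List data items from the start of the record; no starting address is required.
const result = await os.listData('myRecord');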