There are several types of tasks, and each has its own general workflow. Each task is also given a story point estimate, an approximation of the effort and time required to complete it. However, the actual time required varies depending on how much context the assignee has.
All tasks share at least the following workflow:
- Groom the task. This means to determine whether it's appropriate for the sprint, who should take it, who should review it, and how many story points it probably requires.
- Do the task. How each task is generally "done" depends on its type; see more in the sections below.
- Review the task (internal/upstream). How one reviews a task tends to mirror how one does it. For example, discovery tasks generally don't require writing test code, so a review also generally doesn't require testing anything step by step, as would be the case for a general task.
There are eight columns on the sprint board which a ticket will move through as the task is completed. These are detailed under Task statuses.
- *1 point*: Trivial task. We know the codebase, there's little or no code to write, there's little or no need for code review, there's no back-and-forth interaction with outside teams, and no deployments.
  Example: Account Verification emails not sending for HUIT instance (OC-2780)
- *2 points*: Small task. The change required is simple, or the reason for the bug is evident. There may be a deployment, or we may need to interact with an external team (e.g. edX code review), but those are expected to go quickly and smoothly. There will be perhaps one new unit test.
  Example: UI issue in Course Info Overlay (OC-2792)
- *3 points*: Moderate task. There is some uncertainty, so investigation is required; the task will involve some new code, maybe a couple of new tests, and an external review.
  Example: Send email to unregistered students who request password reset (OC-2857)
- *5 points*: The change is significant and would probably take more than a day; or the change seems small, but most people on our team are not familiar with this codebase, and some learning or interaction with an external team will definitely be required. The code will require quite a few new tests.
  Example: Shared RabbitMQ support for OpenCraft IM (OC-1719)
- *8 points*: The task would take anyone a significant amount of time to implement, and will then require a careful review, which will likely involve back-and-forth and more coding. There is significant risk and/or novelty. We will need to interact with external teams, and the upstream review is likely to come back with changes. The code will need several types of tests, including Selenium integration tests. The best approach to take is not yet known, or may be controversial.
  Example: Display Course Information on Landing page (OC-2617)
- *13 points*: The task is very large and novel. It should be split up across sprints as necessary, may require discovery at multiple steps of the implementation, will require a significant amount of test code and careful manual testing (both internally and by other teams), and is generally very experimental, even for non-technical stakeholders.
  Example: Build Programs Landing Page (OC-2963)
Discovery tasks are timeboxed (barring specific exceptions), so the general time approximations don't apply.
- *1 point*: Trivial discovery. We've done a very similar estimation/discovery before (possibly for the same client), the discovery involves material that would be familiar to any one of us, and a discovery document most likely isn't required.
  Example: Estimate PHP upgrade (OC-2881)
- *2 points*: Small discovery. The discovery is relatively simple, or we already have a pretty good idea of what the estimations will be. A discovery document most likely isn't required, and most communication regarding it can happen right on the ticket.
  Example: Ooyala Closed Captions (OC-1899)
- *3 points*: Moderate discovery. There is some uncertainty, so the discovery will require a careful review. A discovery document is likely required.
  Example: Ginkgo Upgrade (OC-2762)
- *5 points*: The discovery will require a deep dive and involves several different moving pieces to consider. There's plenty of uncertainty, so a lot of investigation and a careful review will be required. A discovery document (possibly multiple) is required to effectively communicate the results.
  Example: Cross-cloud Analytics for Cloudera (OC-2109)
- *8 points*: A very open and novel discovery that has all the traits of a 5-point discovery, but will also require contact with other teams, a detailed discovery document, diagrams, and possibly even slides/videos to effectively communicate with external stakeholders.
  Example: OpenStack Production Support (OC-1046)
A prototype task is just like a general task, except that the code does not have to include tests, meet all a11y/i18n requirements, or be merged. The code should still be able to demonstrate the new functionality, though. Usually, after the prototype is complete, the task assignee will create one or more additional stories for the completion of the work.
- *1 point*: Trivial prototype. We're very familiar with what needs to be changed or added to get the prototype ready, no tests should be necessary, and it should take very little effort.
  Example: "Prototype: Client requests site's theme to be a light blue"
- *2 points*: Small prototype. The prototype is pretty simple or straightforward, and is not considerably novel. The additions required will not be significant, and writing tests will most likely not be necessary.
  Example: Auto-advance to next unit after a video (OC-2594)
- *3 points*: Moderate prototype. There is some uncertainty, so investigation is required; tests may be needed.
  Example: Course Blocks API student_view_data for step builder (OC-2809)
- *5 points*: A pretty large prototype for which the solution or preferred path is not obvious, so some fresh decision-making will have to be done. The prototype requires significant domain knowledge.
  Example: Open edX on OpenStack Continuous Integration (CI) (OC-2167)
- *8 points*: The prototype is large and novel, and will require a detailed discovery that'll need to be carefully communicated to the reviewer. A lot of domain knowledge will be required, and some may even need to be invented.
  Example: DiscussionXBlock Prototype (OC-1630)
We mark introductory-level tasks as Newcomer Friendly to help newcomers identify tasks that would be suitable for them to pick up during their trial period at OpenCraft.
To mark a task as newcomer-friendly so that its tag is visible in the sprint backlog, we have to set it in the hidden "Fix Version" field (not the "Affects Version" field), which requires a special process in JIRA:
- Open your cell's sprint board, and go to the Backlog view.
- Open the "Versions" panel on the left side of the backlog.
- Drag the issue you want to tag as Newcomer Friendly onto the "Newcomer Friendly" version for your cell.
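For what it's worth, the drag-and-drop steps above are just a UI for editing the issue's Fix Version field, which can also be set through JIRA's REST API if you ever need to script it. A minimal sketch (the host and issue key in the comment are placeholders, and the exact version name must match the one defined for your cell):

```python
import json

def fix_version_payload(version_name):
    """Build the request body for JIRA's "edit issue" REST endpoint
    that adds a version to the Fix Version field without
    clobbering any versions already set on the issue."""
    return {"update": {"fixVersions": [{"add": {"name": version_name}}]}}

payload = fix_version_payload("Newcomer Friendly")
# This body would be sent as, e.g.:
#   PUT https://<your-jira-host>/rest/api/2/issue/OC-1234
print(json.dumps(payload))
```

Using `"update"` with an `"add"` operation (rather than overwriting `"fields"`) preserves any other versions already present on the issue.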
These tasks should also contain at least the following information:
- *Affected code*: Provide a link to the repository and target branches for each PR you expect this ticket to produce.
  - https://github.com/ORG/REPO/tree/BRANCH
  - ...
- *Preparation*: Indicate what devstack setup or other preparatory steps are required to complete the task.
- *Acceptance criteria*: List these in the description, or as checklist items on the task if you prefer.
- *Related tasks*: Link to any discovery or preliminary tasks, or to the epic, for more context on this work.
- *Estimate*: Ensure that the Original Estimate and Remaining Estimate fields are filled in. If there is a timebox, ensure that this is clearly stated in the ticket description.
- (Optional) Newcomers: please log time in excess of the task estimate on your onboarding task.
Please see below for the ticket statuses we use in our JIRA tracker, and what they each tend to represent.
Backlog
Most tasks start here.
The tasks at the top of the backlog are the ones with the highest priority. Epic owners are usually the ones who create and prioritize the tickets according to the client's needs; except during discovery tasks for large epics, when the person who does the discovery will create the tasks. In general, if you have an idea for a new task, discuss it either:
- With the epic owner of a related epic (who might have time budget available to work on it),
- With your cell (which could allocate some non-billed time, depending on its priorities),
- On the forum if this is meant to be a larger initiative.
This week
This is the list of tasks assigned to individuals and expected to be finished by the end of the sprint.
Once you start working on a task, you should move it to the next column.
In Progress
Tasks that are currently being worked on. In this column, the next steps are the assignee's. Use this column if the task needs any further work, including if the code reviews are done and the code is ready to be merged/deployed.
While you work on the task, log your time progressively ("Log work"). You can also push to your remote branch regularly.
Once you are finished, create a Pull Request and move the task to the next column. Add the Pull Request to the task by clicking More > Link > Web Link and pasting the URL of the Pull Request. (If you have many PRs, add them to the description or use a checklist, so that you can indicate the status of each PR by marking it with a check or strikethrough as it is merged; this isn't possible with a simple link. See OC-1064 as an example.) Also, be sure to notify the person who will review your work; mentioning their username in a comment on the task is usually enough.
Internal Review
This column is for tasks that are waiting for a review from other OpenCraft developer(s), or where the next step is to be done by the reviewer.
The reviewer will look over the code and test it, leaving feedback on the pull request. If the code needs work, the reviewer should move it back to "In Progress." Once all of the reviewers' concerns have been addressed, the reviewer should give a ":+1:" (thumbs up) comment to indicate that they've approved the PR. The task can then proceed back to "In Progress" for the assignee to either merge it or ask upstream (e.g. edX) to do a second review.
Note: If the corrections to be made are fairly trivial, the reviewer should give a conditional +1, e.g. say "+1 if you fix the minor issues a and b." That way, the author won't be blocked waiting for a trivial follow-up review.
External Review
This column is for when the next steps need to be done by someone outside of OpenCraft (e.g. we're waiting for a review from edX), and they have been pinged. This column can also indicate that the ticket is blocked by another one - in that case, the blocking ticket should be "linked" as a blocker in Jira.
The assignee should carefully follow the progress of the external review or blocker. If no progress is made for some time, the assignee should send a polite reminder to the external person/people we are waiting for.
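As an aside, the "linked as a blocker" relationship mentioned above can also be created programmatically via JIRA's issue-link REST endpoint. A minimal sketch (the issue keys and host are placeholders, and the inward/outward mapping assumes JIRA's default "Blocks" link type):

```python
import json

def blocker_link_payload(blocker_key, blocked_key):
    """Build the request body for JIRA's "issueLink" REST endpoint.
    With the default "Blocks" link type, the outward issue "blocks"
    the inward issue (which "is blocked by" it)."""
    return {
        "type": {"name": "Blocks"},
        "outwardIssue": {"key": blocker_key},
        "inwardIssue": {"key": blocked_key},
    }

payload = blocker_link_payload("OC-1111", "OC-2222")
# This body would be sent as, e.g.:
#   POST https://<your-jira-host>/rest/api/2/issueLink
print(json.dumps(payload))
```

If your instance's link types differ from the defaults, check which direction carries the "blocks" description before relying on this mapping.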
If the person doing the merge is on the OpenCraft team, they can move the task forward themselves. Otherwise, you should regularly check whether the PR has been merged, and move the task yourself.
If you (the assignee) do not expect any progress on the ticket in the upcoming sprint, you should move the ticket to the "Long External Review/Blocked" sprint, so that it won't be cluttering up the sprint board and so that your commitments for the upcoming sprint are more certain.
Merged
All PRs from the task (including upstream PRs) have been merged. The assignee is now responsible for deploying the code and notifying the client that the work has been done ("delivering").
Deployed & Delivered
Once all PRs have been merged, the code has been deployed, and the client has been notified that the work is done (including updating the client's Jira or Trello tickets, if applicable), the assignee should move the ticket to Deployed & Delivered.
This column indicates that the task is ready for the sprint manager to check and close it.
Important: All tasks should get to "Deployed & Delivered" (or "Done") or be in "External Review" before the end of the sprint. Any other status is considered a spillover, which is important to avoid.
Done
This column indicates that the ticket has been reviewed by the cell's Sprint Manager, who will double-check the following criteria before moving the ticket to this column:
- All code which could be upstreamed has been upstreamed, or was developed as a plugin using a stable public API.
- All pull requests are merged.
- The code has been deployed.
- The client has been notified, and the corresponding ticket on their Trello/Jira board (if any) has been updated.
- Note: The client does not need to have signed off on the work before we consider it done. If the client finds a bug, simply move the ticket back to "In Progress" if it's still in the same sprint, or create a bugfix ticket if the sprint is over. If the client requests additional changes/features, create a new ticket for the next sprint.
Recurring tasks are part of every sprint.
These are tasks that happen on a regular basis. When you see a task in the "Recurring" column, you can expect work on it, and a time budget allocated for it, in every sprint. Typical examples include mentoring new joiners, team meetings, etc.
Just like regular tasks, recurring tasks start in the "Backlog" and JIRA puts them in "This week" after they get pulled into a sprint for the first time. The assignee of a recurring task should move the task from "This week" straight to "Recurring" at the beginning of the first sprint that includes the task. If the work belonging to a recurring task ends (e.g. because we finished the project that it belongs to), the assignee should move it to "Merged" before the end of the current sprint.
Long External Review/Blocked
Tasks in this category are not part of the current sprint.
This category is for tasks that are waiting for an external review or some other external requirement, and are not expected to be unblocked before the end of the current sprint. If any tasks in this category are assigned to you, you should review them once a week to see if you need to ping the external party to remind them to review/unblock it, or if you are ready to pull it back into a sprint.