Posts Tagged ‘AWS’

Slack and AWS Team-Up to Drive Agility in Software Development

Friday, July 10th, 2020

Why? Because new shared initiatives make it easier and more engaging for teams to manage their AWS resources in Slack. AWS (Amazon Web Services) and Slack Technologies have announced an expanded agreement to deliver innovative solutions that strengthen workforce collaboration and communication. The goal is to help distributed development teams communicate and become more agile in managing their AWS resources from inside Slack.

How?

Slack will migrate its voice and video calling capabilities to Amazon Chime (a communication service by AWS that enables users to chat, place business calls, and meet). In addition, Slack will leverage AWS's global infrastructure to support the rapid adoption of its platform by business customers and to offer them data residency, letting them choose the country or region in which their data is stored at rest while satisfying compliance requirements. Slack will continue to rely on AWS as its preferred cloud provider as it adds innovative collaboration features, while AWS will adopt Slack organization-wide to streamline and strengthen team communication.

Stewart Butterfield, co-founder and CEO of Slack, said: "The future of business software will be driven by the unification of cloud services and workstream collaboration tools. Moreover, the strategic partnership with AWS allows both companies to scale, satisfy demand, and deliver enterprise-grade offerings to our customers. By integrating AWS services with Slack's channel-based messaging platform, we're helping teams easily and seamlessly manage their cloud infrastructure projects and launch cloud-based services without ever leaving Slack."

The Integrations

Slack and AWS will also extend their product integrations and deepen interoperability to help teams manage their AWS resources in Slack channels and Amazon Chime chat rooms with greater flexibility. Let us have a look at some of the integrations:

Amazon AppFlow Integration

It allows users to transfer data between Slack and AWS services rapidly and securely. Need to run data flows regularly? Schedule them in advance or trigger them with specific business events with the help of the AppFlow integration. In the coming months, AWS and Slack will enhance this capability, empowering users to transfer data bi-directionally between multiple Slack channels and AWS services in a single flow.

AWS Chatbot Integration

AWS Chatbot, an interactive agent, enables development teams to monitor and operate AWS resources from where they are already working: Slack. DevOps teams can execute AWS operational activities, including monitoring, system management, and deployment workflows, all inside Slack. AWS Chatbot is already in use by teams across the globe to improve the application development process. In the coming years, the AWS Chatbot service will integrate with more than 175 AWS services, giving developers the ability to collaborate with their teams and maintain their cloud-based services without leaving Slack.

AWS Key Management Service with Slack Enterprise Key Management (EKM)

EKM enables customers to use their own keys, stored in AWS KMS (AWS Key Management Service), to encrypt messages and files. Slack leverages AWS security services such as AWS KMS for the distribution and control of cryptographic keys. Designed for security-conscious or regulated enterprise customers seeking improved visibility and control over their data in Slack, the solution is now used by more than 95 firms to manage their encryption keys.

Amazon Chime

To ensure an upgraded and more comfortable calling experience, Slack Calls is migrating to Amazon Chime's voice and video calling infrastructure. Slack will leverage AWS's proven infrastructure to deliver excellent and reliable user experiences. Soon, AWS will power audio, video, conferencing, and screen-sharing capabilities in native Slack Calls. The transition will also make it easier to add new features, such as mobile video, so users can continue to rely on Slack for secure business communication.

Working Together to Unlock Enterprise Innovation

"Collectively, AWS and Slack are giving development teams the ability to collaborate and innovate faster on the front end with applications, and to manage their backend cloud infrastructure efficiently. We look forward to working with Slack to expand the ways we can help our customers innovate in the world of cloud," said Andy Jassy, CEO of AWS.

Leverage AWS IoT Core for Connecting Devices to the Cloud

Tuesday, June 16th, 2020

Technologies are constantly evolving, with innovative enhancements arriving every day. Connecting your devices to the cloud can be a complex undertaking, and it takes a skilled cloud app development company to get the best results. Managing many internet-connected devices while keeping them secure and reliable can also be a tedious task.

To ease this burden, AWS introduced a fully managed cloud service, AWS IoT Core. Organizations can now connect their devices to the AWS cloud for improved security, interoperability, and visibility. AWS IoT Core offers a centralized platform for secure data storage, retrieval, and convenient access across a variety of devices.

With AWS IoT Core, your applications can track and communicate with all of their connected devices around the clock, even when those devices are offline. It is easy to combine AWS IoT Core with other AWS and Amazon services to create IoT apps that collect, process, analyze, and act on the data generated by connected devices, without managing any infrastructure. These apps can also be managed centrally from a mobile app.

How does AWS IoT Core Operate?

Connect and Manage Your Devices

AWS IoT Core allows seamless connectivity of multiple devices to the cloud and to other devices. It supports HTTP, WebSockets, and MQTT (Message Queuing Telemetry Transport), a communication protocol specifically designed to tolerate irregular and interrupted connections, lessen the code footprint on devices, and reduce network bandwidth requirements. AWS IoT Core also supports industry-standard and custom protocols, and devices using different protocols can intercommunicate.

Secured Device Connections and Information

Whenever a device connects to AWS IoT Core, the connection is encrypted end to end across every link, so data is never exchanged between devices and AWS IoT Core without a proven identity. You can control access to your devices and apps using granular permissions and policies, thanks to the automated configuration and authentication capabilities provided by AWS IoT Core.
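For illustration, a minimal AWS IoT policy might scope a device down to a single client ID and its own topics. This is only a sketch; the account ID, region, client ID, and topic names below are placeholders:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:Connect",
      "Resource": "arn:aws:iot:us-east-1:123456789012:client/my-device-01"
    },
    {
      "Effect": "Allow",
      "Action": ["iot:Publish", "iot:Receive"],
      "Resource": "arn:aws:iot:us-east-1:123456789012:topic/sensors/my-device-01"
    },
    {
      "Effect": "Allow",
      "Action": "iot:Subscribe",
      "Resource": "arn:aws:iot:us-east-1:123456789012:topicfilter/sensors/my-device-01"
    }
  ]
}

Attached to a device's certificate, a policy like this allows the device to connect only under its own client ID, and to publish, subscribe, and receive only on its own topic.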

Process and Act upon Device Data

You can filter, transform, and act upon device data based on the business rules you have defined. You can also update these rules at any time to implement new device and app features.

Read and Set Device State Anytime

The latest state of a connected device is stored within AWS IoT Core so that it can be read or set anywhere, anytime, even when the device is disconnected.

Key Features of AWS IoT Core

Below are the unique and robust AWS IoT Core features that give organizations a seamless experience when connecting IoT devices to the cloud:

Alexa Voice Service (AVS) Support

You can use AVS to manage devices with built-in Alexa capabilities, i.e., a microphone and speaker. With the AVS integration, it is easy to scale to a large number of supported devices and manage them through voice controls. It reduces the cost of building Alexa built-in devices by up to 50%. The AVS integration also enables seamless media handling for connected devices in a virtual cloud environment.

Device Shadow

You can create a persistent, virtual version, or Device Shadow, of every device connected to AWS IoT Core. The shadow is a virtual representation of the device through which applications and other devices can read and interact with the device's real-time state. It also retains the last reported state of each device connected to the AWS cloud. In addition, the Device Shadow provides REST APIs that make it convenient to build interactive applications.
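As a rough sketch of how an application might report state through a shadow, here is an example using the AWS IoT Device SDK for JavaScript (introduced later in this post). The endpoint, certificate paths, and thing name are placeholders:

import awsIot from 'aws-iot-device-sdk';

// Connect a shadow client; all credentials and names below are placeholders.
const thingShadows = awsIot.thingShadow({
  keyPath: './private.pem.key',
  certPath: './certificate.pem.crt',
  caPath: './AmazonRootCA1.pem',
  clientId: 'shadow-client-01',
  host: 'your-endpoint-ats.iot.us-east-1.amazonaws.com',
});

thingShadows.on('connect', () => {
  // Register interest in a thing's shadow, then report its current state.
  thingShadows.register('thermostat-01', {}, () => {
    thingShadows.update('thermostat-01', {
      state: { reported: { temperature: 22.5 } },
    });
  });
});

// 'status' fires when an update, get, or delete operation completes.
thingShadows.on('status', (thingName, stat, clientToken, stateObject) => {
  console.log(`shadow ${thingName}: ${stat}`, stateObject);
});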

Rules Engine

The Rules Engine empowers you to build scalable, robust applications that exchange and process the data generated by connected devices, without having to manage complex and daunting software infrastructure. It evaluates and transforms messages published to AWS IoT Core and delivers them to another device or cloud service.
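For instance, a rule is defined by an SQL-like statement plus one or more actions. Below is a hedged sketch of a topic-rule payload (as passed to aws iot create-topic-rule); the topic names and role ARN are placeholders:

{
  "sql": "SELECT temperature, deviceId FROM 'sensors/+' WHERE temperature > 40",
  "actions": [
    {
      "republish": {
        "topic": "alerts/high-temperature",
        "roleArn": "arn:aws:iam::123456789012:role/iot-republish-role"
      }
    }
  ]
}

This rule listens on every sensors/<device> topic and republishes any reading above 40 degrees to an alerts topic, where another device or service can act on it.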

Authentication and Authorization

AWS IoT Core provides industry-grade security for connected devices, enforcing mutual authentication and encryption at every connection point, so data is only exchanged between devices with a valid, proven identity on AWS IoT Core. There are three main authentication mechanisms:

  • X.509 Certificate-Based Authentication
  • Token-Based Authentication
  • SigV4

Devices connecting over HTTP can use any of the above-mentioned authentication mechanisms, whereas devices connecting over MQTT use certificate-based authentication.

AWS IoT and Mobile SDKs

The AWS IoT Device SDK lets you connect your hardware device or application to AWS IoT Core quickly and efficiently. It enables your devices to connect, authenticate, and exchange messages with AWS IoT Core using protocols such as MQTT, HTTP, and WebSockets. Developers can use the open-source AWS SDKs or create their own SDK to support their IoT devices.
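As a minimal sketch of what this looks like with the AWS IoT Device SDK for JavaScript over MQTT (the endpoint, certificate paths, and topic names are placeholders):

import awsIot from 'aws-iot-device-sdk';

// Connect to AWS IoT Core over MQTT with certificate-based authentication.
// Endpoint, credentials, and topics below are placeholder values.
const device = awsIot.device({
  keyPath: './private.pem.key',
  certPath: './certificate.pem.crt',
  caPath: './AmazonRootCA1.pem',
  clientId: 'sensor-device-01',
  host: 'your-endpoint-ats.iot.us-east-1.amazonaws.com',
});

device.on('connect', () => {
  // Listen for commands and report a reading once connected.
  device.subscribe('commands/sensor-device-01');
  device.publish('sensors/temperature', JSON.stringify({ value: 22.5 }));
});

device.on('message', (topic, payload) => {
  console.log(`received on ${topic}: ${payload.toString()}`);
});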

The Bottom Line

AWS IoT Core empowers people and businesses to connect their devices to the cloud. Its support for protocols such as WebSockets, MQTT, and HTTP facilitates seamless connectivity with minimal bandwidth overhead, and it enables smooth, effective communication between connected devices.

Connecting GraphQL using Apollo Server

Thursday, January 23rd, 2020

Introduction

Apollo Server is a library that helps you connect a GraphQL schema to an HTTP server in Node.js. We will explain it through an example; the project used in this post can be cloned with the command below:

git clone https://[email protected]/prwl/apollo-tutorial.git

The technology and its concepts are best explained through the challenge and solution below.

Challenge

The main goal here is to create a project directory and install the required packages, which will eventually lead us to implementing our first GraphQL subscription with Apollo Server and PubSub.

Solution

The first step is to create a new folder in your working directory and change into it; this folder will hold your server code. Running npm init -y there creates the package.json file for us. After this, we install a few libraries. Once these packages are installed, the next step is to create an index.js file in the root of the project.

Create Directory and Initialize the Project

mkdir apollo-tutorial && cd apollo-tutorial
npm init -y

Install Packages

npm install apollo-server-express express graphql nodemon apollo-server
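Since nodemon is among the installed packages, it is convenient to add a start script to package.json. This is only a suggested addition, not necessarily how the cloned project is organized, and running the import syntax used below may additionally require Babel or "type": "module" in package.json:

"scripts": {
  "start": "nodemon index.js"
}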

Connecting Apollo Server

index.js is where we connect Apollo Server to Express, and where all the libraries come together in source code. First, import the necessary parts for getting started with Apollo Server in Express. Using Apollo Server's applyMiddleware() method, you can opt in any middleware, which in this case is Express.

import express from 'express';
import { ApolloServer, gql } from 'apollo-server-express';

// Schema: type definitions for the GraphQL API
const typeDefs = gql`
  type Query {
    hello: String
  }
`;

// Resolvers: return data for the fields in the schema
const resolvers = {
  Query: {
    hello: () => 'Hello World!',
  },
};

const server = new ApolloServer({ typeDefs, resolvers });
const app = express();
server.applyMiddleware({ app });

app.listen({ port: 4000 }, () =>
  console.log(`🚀 Server ready at http://localhost:4000${server.graphqlPath}`)
);

The GraphQL schema provided to Apollo Server is all the data available for reading and writing via GraphQL, for any client that consumes the GraphQL API. The schema consists of type definitions, starting with a mandatory top-level Query type for reading data, followed by fields and nested fields. Apollo Server supports the scalar types from the GraphQL specification for defining strings (String), booleans (Boolean), integers (Int), and more.

const typeDefs = gql`
  type Query {
    hello: Message
  }

  type Message {
    salutation: String
  }
`;

const resolvers = {
  Query: {
    hello: () => ({ salutation: 'Hello World!' }),
  },
};

In the GraphQL schema for setting up an Apollo Server, resolvers are used to return data for fields from the schema. The data source doesn't matter: the data can be hardcoded, come from a database, or come from another (RESTful) API endpoint.
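For example, the same hello resolver backed by a REST endpoint instead of a hardcoded value might look like this sketch; the URL and response shape are hypothetical, and node-fetch would need to be installed separately:

import fetch from 'node-fetch';

const resolvers = {
  Query: {
    // Any promise-returning data source works; this endpoint is hypothetical.
    hello: async () => {
      const response = await fetch('https://example.com/api/salutation');
      const data = await response.json();
      return { salutation: data.text };
    },
  },
};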

Mutations

So far, we have only defined queries in our GraphQL schema. Apart from the Query type, there are also Mutation and Subscription types. There, you can group all your GraphQL operations for writing data instead of reading it.

const typeDefs = gql`
  type Query {
    ...
  }

  type Mutation {
    createMessage(text: String!): String!
  }
`;

As the snippet above shows, the createMessage mutation accepts a non-nullable text input as an argument and returns the created message as a string.

Again, you have to implement a resolver as the counterpart of the mutation, just as with the previous queries; this happens in the Mutation part of the resolver map:

const resolvers = {
  Query: {
    hello: () => 'Hello World!',
  },
  Mutation: {
    createMessage: (parent, args) => {
      const message = args.text;
      return message;
    },
  },
};

The mutation’s resolver has access to the text in its second argument. The parent argument isn’t used.

So far, the mutation creates a message string and returns it to the API. However, most mutations have side-effects, because they are writing data to your data source or performing another action. Most often, it will be a write operation to your database, but in this case, we are just returning the text passed to us as an argument.
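To make the side-effect concrete, here is a sketch that persists messages to an in-memory array, standing in for a real database write:

const messages = [];

const resolvers = {
  Mutation: {
    createMessage: (parent, args) => {
      const message = args.text;
      messages.push(message); // the side-effect: write to our "data source"
      return message;
    },
  },
};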

That’s it for the first mutation. You can try it right now in GraphQL Playground:

mutation {
  createMessage(text: "Hello GraphQL!")
}

The result of the mutation should look like this:

{
  "data": {
    "createMessage": "Hello GraphQL!"
  }
}

Subscriptions

So far, you used GraphQL to read and write data with queries and mutations. These are the two essential GraphQL operations to get a GraphQL server ready for CRUD operations. Next, you will learn about GraphQL Subscriptions for real-time communication between GraphQL client and server.

Apollo Server Subscription Setup

Because we are using Express as middleware, expose the subscriptions with an advanced HTTP server setup in the index.js file:

import http from 'http';

...

server.applyMiddleware({ app, path: '/graphql' });

const httpServer = http.createServer(app);
server.installSubscriptionHandlers(httpServer);

httpServer.listen({ port: 8000 }, () => {
  console.log('Apollo Server ready at http://localhost:8000/graphql');
});

...

To complete the subscription setup, you’ll need to use one of the available PubSub engines for publishing and subscribing to events. Apollo Server comes with its own by default.

Let’s implement the specific subscription for the message creation. It should be possible for another GraphQL client to listen to message creations.

Create a file named subscription.js in the root directory of your project and paste the following lines into it:

import { PubSub } from 'apollo-server';

export const CREATED = 'CREATED';

export const EVENTS = {
  MESSAGE: CREATED,
};

export default new PubSub();

The only piece missing is using the event and the PubSub instance in your resolver.

...

import pubsub, { EVENTS } from './subscription';

...

const resolvers = {
  Query: {
    ...
  },
  Mutation: {
    ...
  },
  Subscription: {
    messageCreated: {
      subscribe: () => pubsub.asyncIterator(EVENTS.MESSAGE),
    },
  },
};

...

Also, update your schema for the newly created Subscription:

const typeDefs = gql`
  type Query {
    ...
  }

  type Mutation {
    ...
  }

  type Subscription {
    messageCreated: String!
  }
`;

The subscription resolver provides the counterpart for the subscription in the message schema. However, since it uses a publisher-subscriber mechanism (PubSub) for events, you have only implemented the subscribing side, not the publishing side. A GraphQL client can listen for changes, but no changes are published yet. The best place to publish a newly created message is in the same resolver that creates the message:

...

import pubsub, { EVENTS } from './subscription';

...

const resolvers = {
  Query: {
    ...
  },
  Mutation: {
    createMessage: (parent, args) => {
      const message = args.text;

      pubsub.publish(EVENTS.MESSAGE, {
        messageCreated: message,
      });

      return message;
    },
  },
  Subscription: {
    ...
  },
};

...

We have now implemented our first subscription in GraphQL with Apollo Server and PubSub. To test it, open GraphQL Playground in two tabs: one to listen to the subscription and one to create a new message.

In the first tab, execute the subscription:

subscription {
  messageCreated
}

In the second tab, execute the createMessage mutation:

mutation {
  createMessage(text: "My name is John.")
}

Now check the first tab (the subscription) for a response like this:

{
  "data": {
    "messageCreated": "My name is John."
  }
}

We have implemented GraphQL subscriptions.
